Test Report: Docker_Linux_crio 20053

ee589ed5f2e38de21e277596fb8e32edfda5a06e:2024-12-05:37358

Test fail (13/329)

TestAddons/parallel/Ingress (491.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-583828 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-583828 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-583828 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [06f0ad05-fff2-461e-9051-b1a79714bd25] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-583828 -n addons-583828
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-12-05 20:36:19.479083626 +0000 UTC m=+713.286918433
addons_test.go:250: (dbg) Run:  kubectl --context addons-583828 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-583828 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-583828/192.168.49.2
Start Time:       Thu, 05 Dec 2024 20:28:19 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
IP:  10.244.0.31
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wdtd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2wdtd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-583828
Warning  Failed     7m27s                  kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     6m45s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    4m24s (x4 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     3m23s (x4 over 7m27s)  kubelet            Error: ErrImagePull
Warning  Failed     3m23s (x2 over 5m16s)  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    2m59s (x7 over 7m27s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m59s (x7 over 7m27s)  kubelet            Error: ImagePullBackOff
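The Events table above shows the root cause: every pull of docker.io/nginx:alpine was rejected with toomanyrequests (the anonymous Docker Hub rate limit), so the container stayed in ImagePullBackOff for the full 8m0s wait. As an illustrative manual follow-up (not something the harness ran), the same pull could be retried from inside the minikube node to confirm the limit; the profile name addons-583828 comes from this run, and crictl is assumed to be available since the cluster uses the crio runtime:

# Illustrative re-check, not executed by the test harness:
# open a shell on the node for this profile and retry the pull with crictl
out/minikube-linux-amd64 -p addons-583828 ssh -- sudo crictl pull docker.io/nginx:alpine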
addons_test.go:250: (dbg) Run:  kubectl --context addons-583828 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-583828 logs nginx -n default: exit status 1 (69.742915ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-583828 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for ngnix pod: run=nginx within 8m0s: context deadline exceeded
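A manual re-check of the failing pod with the same kubectl context the test used (a sketch of a by-hand follow-up, not harness code) would look like:

# Sketch of a manual re-check using the same context and namespace as the test:
kubectl --context addons-583828 get pod nginx -n default -o wide
kubectl --context addons-583828 get events -n default --field-selector involvedObject.name=nginx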
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-583828
helpers_test.go:235: (dbg) docker inspect addons-583828:

-- stdout --
	[
	    {
	        "Id": "23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c",
	        "Created": "2024-12-05T20:25:03.731974458Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 832431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T20:25:03.852223097Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/hostname",
	        "HostsPath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/hosts",
	        "LogPath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c-json.log",
	        "Name": "/addons-583828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-583828:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-583828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7-init/diff:/var/lib/docker/overlay2/0f5bc7fa09e0d0f29301db80becc3339e358e049d584dfb307a79bde49527770/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-583828",
	                "Source": "/var/lib/docker/volumes/addons-583828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-583828",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-583828",
	                "name.minikube.sigs.k8s.io": "addons-583828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed8e9041c46e14e30b26a0df885e68a7c08fd77cec87c90c7104a1f8f7ab0f11",
	            "SandboxKey": "/var/run/docker/netns/ed8e9041c46e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-583828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4f3a539ee7c46d697dbcb6db4f5ef0224be703b3ddf3422109c24e64c1203597",
	                    "EndpointID": "9cfe539f71d453e8e613ddcbc480d964c3fe844770a905f4c038b3334cdb549c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-583828",
	                        "23a3cfafc9ea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-583828 -n addons-583828
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 logs -n 25: (1.170197088s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-350205                                                                     | download-only-350205   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| delete  | -p download-only-949612                                                                     | download-only-949612   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| start   | --download-only -p                                                                          | download-docker-384641 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | download-docker-384641                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-384641                                                                   | download-docker-384641 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-451629   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | binary-mirror-451629                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40015                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-451629                                                                     | binary-mirror-451629   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| addons  | disable dashboard -p                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | addons-583828                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | addons-583828                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-583828 --wait=true                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | -p addons-583828                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-583828 ip                                                                            | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-583828 ssh cat                                                                       | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | /opt/local-path-provisioner/pvc-7e18edaf-3638-4016-8b18-2b20bbc1377b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:32 UTC | 05 Dec 24 20:32 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:24:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:24:39.691689  831680 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:24:39.691822  831680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:39.691831  831680 out.go:358] Setting ErrFile to fd 2...
	I1205 20:24:39.691836  831680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:39.692053  831680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:24:39.692671  831680 out.go:352] Setting JSON to false
	I1205 20:24:39.693712  831680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11229,"bootTime":1733419051,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:24:39.693778  831680 start.go:139] virtualization: kvm guest
	I1205 20:24:39.696017  831680 out.go:177] * [addons-583828] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:24:39.697327  831680 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:24:39.697325  831680 notify.go:220] Checking for updates...
	I1205 20:24:39.699990  831680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:24:39.701330  831680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:24:39.702525  831680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:24:39.703779  831680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:24:39.705057  831680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:24:39.706350  831680 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:24:39.728535  831680 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:24:39.728623  831680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:39.774619  831680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-05 20:24:39.765310734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:39.774735  831680 docker.go:318] overlay module found
	I1205 20:24:39.776947  831680 out.go:177] * Using the docker driver based on user configuration
	I1205 20:24:39.778398  831680 start.go:297] selected driver: docker
	I1205 20:24:39.778411  831680 start.go:901] validating driver "docker" against <nil>
	I1205 20:24:39.778423  831680 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:24:39.779287  831680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:39.824840  831680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-05 20:24:39.816379028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:39.825032  831680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:24:39.825280  831680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:24:39.826853  831680 out.go:177] * Using Docker driver with root privileges
	I1205 20:24:39.828291  831680 cni.go:84] Creating CNI manager for ""
	I1205 20:24:39.828357  831680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:24:39.828370  831680 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:24:39.828426  831680 start.go:340] cluster config:
	{Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:24:39.829707  831680 out.go:177] * Starting "addons-583828" primary control-plane node in "addons-583828" cluster
	I1205 20:24:39.830844  831680 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:24:39.832294  831680 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 20:24:39.833408  831680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:24:39.833446  831680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:24:39.833457  831680 cache.go:56] Caching tarball of preloaded images
	I1205 20:24:39.833500  831680 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 20:24:39.833554  831680 preload.go:172] Found /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:24:39.833568  831680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:24:39.833952  831680 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/config.json ...
	I1205 20:24:39.833979  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/config.json: {Name:mka9ab8b23a164b9c916173a422ec994cf906b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:24:39.849777  831680 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 20:24:39.849960  831680 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 20:24:39.849979  831680 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1205 20:24:39.849985  831680 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1205 20:24:39.850000  831680 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1205 20:24:39.850012  831680 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1205 20:24:51.691581  831680 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1205 20:24:51.691629  831680 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:24:51.691666  831680 start.go:360] acquireMachinesLock for addons-583828: {Name:mk4ded944d810c830c5a1bda8a8a9c5dc897e3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:24:51.691784  831680 start.go:364] duration metric: took 81.79µs to acquireMachinesLock for "addons-583828"
	I1205 20:24:51.691809  831680 start.go:93] Provisioning new machine with config: &{Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:24:51.691877  831680 start.go:125] createHost starting for "" (driver="docker")
	I1205 20:24:51.693764  831680 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 20:24:51.694034  831680 start.go:159] libmachine.API.Create for "addons-583828" (driver="docker")
	I1205 20:24:51.694083  831680 client.go:168] LocalClient.Create starting
	I1205 20:24:51.694178  831680 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem
	I1205 20:24:51.945489  831680 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem
	I1205 20:24:52.057379  831680 cli_runner.go:164] Run: docker network inspect addons-583828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 20:24:52.074202  831680 cli_runner.go:211] docker network inspect addons-583828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 20:24:52.074333  831680 network_create.go:284] running [docker network inspect addons-583828] to gather additional debugging logs...
	I1205 20:24:52.074367  831680 cli_runner.go:164] Run: docker network inspect addons-583828
	W1205 20:24:52.090008  831680 cli_runner.go:211] docker network inspect addons-583828 returned with exit code 1
	I1205 20:24:52.090114  831680 network_create.go:287] error running [docker network inspect addons-583828]: docker network inspect addons-583828: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-583828 not found
	I1205 20:24:52.090155  831680 network_create.go:289] output of [docker network inspect addons-583828]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-583828 not found
	
	** /stderr **
	I1205 20:24:52.090263  831680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:24:52.107649  831680 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001710900}
	I1205 20:24:52.107715  831680 network_create.go:124] attempt to create docker network addons-583828 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 20:24:52.107780  831680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-583828 addons-583828
	I1205 20:24:52.173271  831680 network_create.go:108] docker network addons-583828 192.168.49.0/24 created
	I1205 20:24:52.173316  831680 kic.go:121] calculated static IP "192.168.49.2" for the "addons-583828" container
	I1205 20:24:52.173393  831680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 20:24:52.189532  831680 cli_runner.go:164] Run: docker volume create addons-583828 --label name.minikube.sigs.k8s.io=addons-583828 --label created_by.minikube.sigs.k8s.io=true
	I1205 20:24:52.207311  831680 oci.go:103] Successfully created a docker volume addons-583828
	I1205 20:24:52.207412  831680 cli_runner.go:164] Run: docker run --rm --name addons-583828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583828 --entrypoint /usr/bin/test -v addons-583828:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1205 20:24:59.073934  831680 cli_runner.go:217] Completed: docker run --rm --name addons-583828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583828 --entrypoint /usr/bin/test -v addons-583828:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (6.866474622s)
	I1205 20:24:59.073970  831680 oci.go:107] Successfully prepared a docker volume addons-583828
	I1205 20:24:59.073993  831680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:24:59.074022  831680 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 20:24:59.074089  831680 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-583828:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 20:25:03.666897  831680 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-583828:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.592733481s)
	I1205 20:25:03.666932  831680 kic.go:203] duration metric: took 4.592908745s to extract preloaded images to volume ...
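For readers unfamiliar with the kic preload step just completed, the mechanism is simply a throwaway container whose only job is to untar the lz4-compressed image preload into the named Docker volume. A minimal Go sketch of that idea follows; the tarball path, volume name and base image are placeholders (the base image must contain tar and lz4), not the exact values minikube uses:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload runs a short-lived container that untars an lz4-compressed
	// preload tarball into a named Docker volume, mirroring the
	// "docker run --rm --entrypoint /usr/bin/tar ..." step in the log above.
	func extractPreload(tarball, volume, baseImage string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			baseImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical arguments; substitute a real preload tarball, volume and image.
		if err := extractPreload("/tmp/preloaded-images.tar.lz4", "demo-volume", "ubuntu:22.04"); err != nil {
			fmt.Println(err)
		}
	}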
	W1205 20:25:03.667072  831680 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 20:25:03.667179  831680 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 20:25:03.716322  831680 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-583828 --name addons-583828 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583828 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-583828 --network addons-583828 --ip 192.168.49.2 --volume addons-583828:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1205 20:25:04.007595  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Running}}
	I1205 20:25:04.026649  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:04.046749  831680 cli_runner.go:164] Run: docker exec addons-583828 stat /var/lib/dpkg/alternatives/iptables
	I1205 20:25:04.089416  831680 oci.go:144] the created container "addons-583828" has a running status.
	I1205 20:25:04.089450  831680 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa...
	I1205 20:25:04.279308  831680 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 20:25:04.299851  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:04.326851  831680 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 20:25:04.326876  831680 kic_runner.go:114] Args: [docker exec --privileged addons-583828 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 20:25:04.422075  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:04.445043  831680 machine.go:93] provisionDockerMachine start ...
	I1205 20:25:04.445162  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:04.468182  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:04.468436  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:04.468454  831680 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:25:04.688611  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-583828
	
	I1205 20:25:04.688657  831680 ubuntu.go:169] provisioning hostname "addons-583828"
	I1205 20:25:04.688741  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:04.708383  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:04.708602  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:04.708622  831680 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-583828 && echo "addons-583828" | sudo tee /etc/hostname
	I1205 20:25:04.853379  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-583828
	
	I1205 20:25:04.853468  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:04.871719  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:04.871919  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:04.871937  831680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-583828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-583828/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-583828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:25:05.001305  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:25:05.001336  831680 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20053-823623/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-823623/.minikube}
	I1205 20:25:05.001367  831680 ubuntu.go:177] setting up certificates
	I1205 20:25:05.001381  831680 provision.go:84] configureAuth start
	I1205 20:25:05.001440  831680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583828
	I1205 20:25:05.019055  831680 provision.go:143] copyHostCerts
	I1205 20:25:05.019139  831680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-823623/.minikube/ca.pem (1078 bytes)
	I1205 20:25:05.019282  831680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-823623/.minikube/cert.pem (1123 bytes)
	I1205 20:25:05.019349  831680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-823623/.minikube/key.pem (1679 bytes)
	I1205 20:25:05.019399  831680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-823623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca-key.pem org=jenkins.addons-583828 san=[127.0.0.1 192.168.49.2 addons-583828 localhost minikube]
	I1205 20:25:05.117161  831680 provision.go:177] copyRemoteCerts
	I1205 20:25:05.117249  831680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:25:05.117301  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.135132  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.230225  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:25:05.253790  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:25:05.277353  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:25:05.300966  831680 provision.go:87] duration metric: took 299.563049ms to configureAuth
	I1205 20:25:05.301009  831680 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:25:05.301200  831680 config.go:182] Loaded profile config "addons-583828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:05.301314  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.319855  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:05.320072  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:05.320102  831680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:25:05.544675  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:25:05.544704  831680 machine.go:96] duration metric: took 1.099635008s to provisionDockerMachine
	I1205 20:25:05.544716  831680 client.go:171] duration metric: took 13.850623198s to LocalClient.Create
	I1205 20:25:05.544734  831680 start.go:167] duration metric: took 13.850702137s to libmachine.API.Create "addons-583828"
	I1205 20:25:05.544744  831680 start.go:293] postStartSetup for "addons-583828" (driver="docker")
	I1205 20:25:05.544761  831680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:25:05.544838  831680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:25:05.544881  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.562988  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.658728  831680 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:25:05.662233  831680 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:25:05.662282  831680 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:25:05.662290  831680 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:25:05.662298  831680 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 20:25:05.662313  831680 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-823623/.minikube/addons for local assets ...
	I1205 20:25:05.662379  831680 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-823623/.minikube/files for local assets ...
	I1205 20:25:05.662403  831680 start.go:296] duration metric: took 117.647983ms for postStartSetup
	I1205 20:25:05.662708  831680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583828
	I1205 20:25:05.681670  831680 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/config.json ...
	I1205 20:25:05.681981  831680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:25:05.682063  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.700607  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.790218  831680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:25:05.794810  831680 start.go:128] duration metric: took 14.102914635s to createHost
	I1205 20:25:05.794840  831680 start.go:83] releasing machines lock for "addons-583828", held for 14.103043196s
	I1205 20:25:05.794925  831680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583828
	I1205 20:25:05.812280  831680 ssh_runner.go:195] Run: cat /version.json
	I1205 20:25:05.812352  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.812356  831680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:25:05.812411  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.832282  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.832657  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.998926  831680 ssh_runner.go:195] Run: systemctl --version
	I1205 20:25:06.003471  831680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:25:06.145021  831680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:25:06.149896  831680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:25:06.169829  831680 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:25:06.169931  831680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:25:06.199311  831680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 20:25:06.199341  831680 start.go:495] detecting cgroup driver to use...
	I1205 20:25:06.199384  831680 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 20:25:06.199457  831680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:25:06.215640  831680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:25:06.226828  831680 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:25:06.226899  831680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:25:06.239972  831680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:25:06.254908  831680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:25:06.331418  831680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:25:06.416499  831680 docker.go:233] disabling docker service ...
	I1205 20:25:06.416577  831680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:25:06.436381  831680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:25:06.448155  831680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:25:06.531116  831680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:25:06.612073  831680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:25:06.623275  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:25:06.639478  831680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:25:06.639550  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.650274  831680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:25:06.650360  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.660543  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.670851  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.681702  831680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:25:06.692044  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.702284  831680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.718572  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.728485  831680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:25:06.736834  831680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:25:06.745242  831680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:25:06.822121  831680 ssh_runner.go:195] Run: sudo systemctl restart crio
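Taken together, the sed edits above would leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart. This is a sketch reconstructed from the commands in this log (the section headers are assumed from CRI-O's default layout), not a dump of the actual file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]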
	I1205 20:25:06.931562  831680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:25:06.931665  831680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:25:06.935408  831680 start.go:563] Will wait 60s for crictl version
	I1205 20:25:06.935472  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:25:06.938794  831680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:25:06.973589  831680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 20:25:06.973672  831680 ssh_runner.go:195] Run: crio --version
	I1205 20:25:07.010037  831680 ssh_runner.go:195] Run: crio --version
	I1205 20:25:07.047228  831680 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 20:25:07.048570  831680 cli_runner.go:164] Run: docker network inspect addons-583828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:25:07.066157  831680 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 20:25:07.070191  831680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:25:07.082490  831680 kubeadm.go:883] updating cluster {Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:25:07.082616  831680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:25:07.082667  831680 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:25:07.151041  831680 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:25:07.151071  831680 crio.go:433] Images already preloaded, skipping extraction
	I1205 20:25:07.151130  831680 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:25:07.184077  831680 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:25:07.184107  831680 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:25:07.184119  831680 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1205 20:25:07.184245  831680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-583828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:25:07.184329  831680 ssh_runner.go:195] Run: crio config
	I1205 20:25:07.228423  831680 cni.go:84] Creating CNI manager for ""
	I1205 20:25:07.228448  831680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:25:07.228461  831680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:25:07.228484  831680 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-583828 NodeName:addons-583828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:25:07.228634  831680 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-583828"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
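The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that splits such a file into its documents and reports the kind of each can be handy when eyeballing what minikube rendered; the file path is a placeholder (on the node the rendered file lands at /var/tmp/minikube/kubeadm.yaml per the log below), and gopkg.in/yaml.v3 is assumed available:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Split on document separators and report apiVersion/kind per document.
		for _, doc := range strings.Split(string(data), "\n---") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil || meta.Kind == "" {
				continue // skip empty or non-object documents
			}
			fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
		}
	}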
	
	I1205 20:25:07.228702  831680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:25:07.237934  831680 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:25:07.238017  831680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:25:07.246661  831680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 20:25:07.264254  831680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:25:07.281467  831680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1205 20:25:07.298986  831680 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 20:25:07.302842  831680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:25:07.313729  831680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:25:07.395631  831680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:25:07.408919  831680 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828 for IP: 192.168.49.2
	I1205 20:25:07.408951  831680 certs.go:194] generating shared ca certs ...
	I1205 20:25:07.408976  831680 certs.go:226] acquiring lock for ca certs: {Name:mke4ccebecd1ee68171cc800d6bc3abd7616bf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.409166  831680 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key
	I1205 20:25:07.666515  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt ...
	I1205 20:25:07.666557  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt: {Name:mk4ca2ecc886e49fb3989918896448d71f14a1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.666785  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key ...
	I1205 20:25:07.666804  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key: {Name:mka1a40173cbae49266cc33991920a68d9bf7a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.666921  831680 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key
	I1205 20:25:07.814752  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.crt ...
	I1205 20:25:07.814788  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.crt: {Name:mk32392bb439f48ba844502d0094f45eb93fca5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.814971  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key ...
	I1205 20:25:07.814989  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key: {Name:mk64c94fe082a3c8b3a5df5322d4c77c5d5d4b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.815096  831680 certs.go:256] generating profile certs ...
	I1205 20:25:07.815177  831680 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.key
	I1205 20:25:07.815199  831680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt with IP's: []
	I1205 20:25:07.887110  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt ...
	I1205 20:25:07.887153  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: {Name:mkecdb5815ddd7a55b990e08588fa22218865530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.887362  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.key ...
	I1205 20:25:07.887378  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.key: {Name:mk2cad970dc24ba84a4a459836b2a00bc1082777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.887479  831680 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799
	I1205 20:25:07.887505  831680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 20:25:08.357138  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799 ...
	I1205 20:25:08.357183  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799: {Name:mk5fc20678d44d41d46ac5c2e916ba4d3d960aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.357402  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799 ...
	I1205 20:25:08.357423  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799: {Name:mkd526552707e9e1af645510765abe85e1843157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.357531  831680 certs.go:381] copying /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799 -> /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt
	I1205 20:25:08.357637  831680 certs.go:385] copying /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799 -> /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key
	I1205 20:25:08.357718  831680 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key
	I1205 20:25:08.357749  831680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt with IP's: []
	I1205 20:25:08.491735  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt ...
	I1205 20:25:08.491777  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt: {Name:mk867dc812f11a9b557ceea6008c3c6754041c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.491988  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key ...
	I1205 20:25:08.492008  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key: {Name:mkbfb5c2f2f80b7f1a012de232e9db115e5277b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.492227  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:25:08.492286  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:25:08.492327  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:25:08.492375  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/key.pem (1679 bytes)
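The certs.go/crypto.go steps above amount to creating a self-signed CA and then signing leaf certificates (client, apiserver, proxy-client) with it. A compact, self-contained sketch of the CA half using only the Go standard library is shown below; the key size, subject and output file names are arbitrary choices for illustration, not minikube's exact parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate the CA key pair (2048-bit RSA chosen for brevity).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		// Self-signed CA template: IsCA plus certificate-signing key usage.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "demoCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}

		// Write ca.crt / ca.key as PEM, analogous to the files under .minikube.
		_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
	}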
	I1205 20:25:08.493102  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:25:08.517914  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:25:08.542354  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:25:08.566777  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:25:08.590879  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 20:25:08.614464  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:25:08.637824  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:25:08.661533  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:25:08.685220  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:25:08.709327  831680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:25:08.726772  831680 ssh_runner.go:195] Run: openssl version
	I1205 20:25:08.732371  831680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:25:08.741798  831680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:25:08.745305  831680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:25:08.745365  831680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:25:08.752220  831680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:25:08.761740  831680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:25:08.765244  831680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:25:08.765293  831680 kubeadm.go:392] StartCluster: {Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:25:08.765382  831680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:25:08.765432  831680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:25:08.800348  831680 cri.go:89] found id: ""
	I1205 20:25:08.800421  831680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:25:08.809313  831680 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:25:08.817974  831680 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1205 20:25:08.818040  831680 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:25:08.826466  831680 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:25:08.826491  831680 kubeadm.go:157] found existing configuration files:
	
	I1205 20:25:08.826549  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:25:08.835135  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:25:08.835190  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:25:08.843630  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:25:08.852236  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:25:08.852317  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:25:08.861946  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:25:08.871304  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:25:08.871377  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:25:08.880583  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:25:08.889443  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:25:08.889510  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:25:08.897486  831680 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 20:25:08.935422  831680 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:25:08.935533  831680 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:25:08.952942  831680 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1205 20:25:08.953064  831680 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1205 20:25:08.953117  831680 kubeadm.go:310] OS: Linux
	I1205 20:25:08.953192  831680 kubeadm.go:310] CGROUPS_CPU: enabled
	I1205 20:25:08.953247  831680 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1205 20:25:08.953289  831680 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1205 20:25:08.953335  831680 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1205 20:25:08.953378  831680 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1205 20:25:08.953458  831680 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1205 20:25:08.953528  831680 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1205 20:25:08.953607  831680 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1205 20:25:08.953661  831680 kubeadm.go:310] CGROUPS_BLKIO: enabled
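kubeadm's preflight output above lists the CGROUPS_* capability checks. One rough way to see locally which controllers are enabled on a cgroup v2 host is to read the root cgroup.controllers file; kubeadm's own verification logic is more involved (and cgroup v1 hosts expose per-controller directories instead), so this is only an illustration:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// On cgroup v2 hosts the enabled controllers are listed in one file.
		data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
		if err != nil {
			fmt.Fprintln(os.Stderr, "not a cgroup v2 host (or unreadable):", err)
			os.Exit(1)
		}
		for _, c := range strings.Fields(string(data)) {
			fmt.Println("enabled controller:", c)
		}
	}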
	I1205 20:25:09.006919  831680 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:25:09.007055  831680 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:25:09.007189  831680 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:25:09.014151  831680 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:25:09.017218  831680 out.go:235]   - Generating certificates and keys ...
	I1205 20:25:09.017329  831680 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:25:09.017392  831680 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:25:09.223237  831680 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:25:09.361853  831680 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:25:09.465596  831680 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:25:09.540297  831680 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:25:09.675864  831680 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:25:09.676035  831680 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-583828 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 20:25:09.823949  831680 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:25:09.824094  831680 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-583828 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 20:25:10.017244  831680 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:25:10.150094  831680 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:25:10.265760  831680 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:25:10.265881  831680 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:25:10.506959  831680 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:25:10.613042  831680 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:25:10.773557  831680 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:25:10.858551  831680 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:25:10.944620  831680 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:25:10.945094  831680 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:25:10.947667  831680 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:25:10.949780  831680 out.go:235]   - Booting up control plane ...
	I1205 20:25:10.949918  831680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:25:10.949992  831680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:25:10.950556  831680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:25:10.960097  831680 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:25:10.965544  831680 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:25:10.965610  831680 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:25:11.044081  831680 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:25:11.044265  831680 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:25:11.545686  831680 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.746842ms
	I1205 20:25:11.545775  831680 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:25:16.047211  831680 kubeadm.go:310] [api-check] The API server is healthy after 4.501501774s
	I1205 20:25:16.059306  831680 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:25:16.071792  831680 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:25:16.090776  831680 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:25:16.091066  831680 kubeadm.go:310] [mark-control-plane] Marking the node addons-583828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:25:16.098837  831680 kubeadm.go:310] [bootstrap-token] Using token: evkn3l.jc6r2670y9dag6rg
	I1205 20:25:16.100508  831680 out.go:235]   - Configuring RBAC rules ...
	I1205 20:25:16.100623  831680 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:25:16.106685  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:25:16.113191  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:25:16.115934  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:25:16.118788  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:25:16.122418  831680 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:25:16.453935  831680 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:25:16.875309  831680 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:25:17.457277  831680 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:25:17.458489  831680 kubeadm.go:310] 
	I1205 20:25:17.458585  831680 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:25:17.458600  831680 kubeadm.go:310] 
	I1205 20:25:17.458708  831680 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:25:17.458746  831680 kubeadm.go:310] 
	I1205 20:25:17.458796  831680 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:25:17.458886  831680 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:25:17.458963  831680 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:25:17.458977  831680 kubeadm.go:310] 
	I1205 20:25:17.459073  831680 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:25:17.459110  831680 kubeadm.go:310] 
	I1205 20:25:17.459190  831680 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:25:17.459206  831680 kubeadm.go:310] 
	I1205 20:25:17.459295  831680 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:25:17.459426  831680 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:25:17.459485  831680 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:25:17.459496  831680 kubeadm.go:310] 
	I1205 20:25:17.459611  831680 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:25:17.459697  831680 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:25:17.459704  831680 kubeadm.go:310] 
	I1205 20:25:17.459793  831680 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token evkn3l.jc6r2670y9dag6rg \
	I1205 20:25:17.459915  831680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a89d03b6be8118d89fe05341663c46b6deed4b956c25004c98e677338dc832f2 \
	I1205 20:25:17.459949  831680 kubeadm.go:310] 	--control-plane 
	I1205 20:25:17.459959  831680 kubeadm.go:310] 
	I1205 20:25:17.460073  831680 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:25:17.460082  831680 kubeadm.go:310] 
	I1205 20:25:17.460194  831680 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token evkn3l.jc6r2670y9dag6rg \
	I1205 20:25:17.460332  831680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a89d03b6be8118d89fe05341663c46b6deed4b956c25004c98e677338dc832f2 
	I1205 20:25:17.462677  831680 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1205 20:25:17.462828  831680 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
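For context on the join commands printed above, the --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). The sketch below recomputes it from a ca.crt, which can be useful when sanity-checking a join command; the file path is an assumption (in this run the cluster CA sits at /var/lib/minikube/certs/ca.crt on the node):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}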
	I1205 20:25:17.462847  831680 cni.go:84] Creating CNI manager for ""
	I1205 20:25:17.462856  831680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:25:17.464668  831680 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:25:17.466256  831680 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:25:17.470569  831680 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 20:25:17.470591  831680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 20:25:17.488720  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:25:17.690368  831680 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:25:17.690471  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:17.690509  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-583828 minikube.k8s.io/updated_at=2024_12_05T20_25_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=addons-583828 minikube.k8s.io/primary=true
	I1205 20:25:17.698076  831680 ops.go:34] apiserver oom_adj: -16
	I1205 20:25:17.759309  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:18.260223  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:18.759526  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:19.260215  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:19.760388  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:20.260175  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:20.760130  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:21.259724  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:21.759755  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:22.260206  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:22.329475  831680 kubeadm.go:1113] duration metric: took 4.639070164s to wait for elevateKubeSystemPrivileges
	I1205 20:25:22.329566  831680 kubeadm.go:394] duration metric: took 13.564276843s to StartCluster
	I1205 20:25:22.329599  831680 settings.go:142] acquiring lock: {Name:mk7ebf380bcfa7aba647ea9c26917767ebbabc59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:22.329747  831680 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:25:22.330352  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/kubeconfig: {Name:mked749022ef3c102f724c73a9801abef71a2d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:22.330608  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:25:22.330624  831680 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:25:22.330701  831680 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 20:25:22.330875  831680 addons.go:69] Setting yakd=true in profile "addons-583828"
	I1205 20:25:22.330885  831680 config.go:182] Loaded profile config "addons-583828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:22.330900  831680 addons.go:234] Setting addon yakd=true in "addons-583828"
	I1205 20:25:22.330900  831680 addons.go:69] Setting ingress=true in profile "addons-583828"
	I1205 20:25:22.330916  831680 addons.go:69] Setting default-storageclass=true in profile "addons-583828"
	I1205 20:25:22.330925  831680 addons.go:234] Setting addon ingress=true in "addons-583828"
	I1205 20:25:22.330921  831680 addons.go:69] Setting gcp-auth=true in profile "addons-583828"
	I1205 20:25:22.330939  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.330941  831680 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-583828"
	I1205 20:25:22.330965  831680 mustload.go:65] Loading cluster: addons-583828
	I1205 20:25:22.330934  831680 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-583828"
	I1205 20:25:22.331287  831680 addons.go:69] Setting ingress-dns=true in profile "addons-583828"
	I1205 20:25:22.331314  831680 addons.go:234] Setting addon ingress-dns=true in "addons-583828"
	I1205 20:25:22.331343  831680 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-583828"
	I1205 20:25:22.331357  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.331386  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.330901  831680 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-583828"
	I1205 20:25:22.331583  831680 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-583828"
	I1205 20:25:22.331610  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.331620  831680 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-583828"
	I1205 20:25:22.331633  831680 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-583828"
	I1205 20:25:22.331653  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.331938  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332193  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332287  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332456  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332863  831680 addons.go:69] Setting registry=true in profile "addons-583828"
	I1205 20:25:22.332884  831680 addons.go:234] Setting addon registry=true in "addons-583828"
	I1205 20:25:22.332938  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.333496  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.333838  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.333964  831680 addons.go:69] Setting storage-provisioner=true in profile "addons-583828"
	I1205 20:25:22.333988  831680 addons.go:234] Setting addon storage-provisioner=true in "addons-583828"
	I1205 20:25:22.334014  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.334275  831680 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-583828"
	I1205 20:25:22.334343  831680 out.go:177] * Verifying Kubernetes components...
	I1205 20:25:22.334506  831680 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-583828"
	I1205 20:25:22.334654  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.334724  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.334947  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.330887  831680 addons.go:69] Setting metrics-server=true in profile "addons-583828"
	I1205 20:25:22.336071  831680 addons.go:234] Setting addon metrics-server=true in "addons-583828"
	I1205 20:25:22.336116  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.336521  831680 addons.go:69] Setting volcano=true in profile "addons-583828"
	I1205 20:25:22.336591  831680 addons.go:234] Setting addon volcano=true in "addons-583828"
	I1205 20:25:22.336686  831680 addons.go:69] Setting volumesnapshots=true in profile "addons-583828"
	I1205 20:25:22.336722  831680 addons.go:234] Setting addon volumesnapshots=true in "addons-583828"
	I1205 20:25:22.336758  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.336819  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.337038  831680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:25:22.337964  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.334372  831680 addons.go:69] Setting inspektor-gadget=true in profile "addons-583828"
	I1205 20:25:22.340119  831680 addons.go:234] Setting addon inspektor-gadget=true in "addons-583828"
	I1205 20:25:22.340211  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.338915  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.330886  831680 addons.go:69] Setting cloud-spanner=true in profile "addons-583828"
	I1205 20:25:22.331550  831680 config.go:182] Loaded profile config "addons-583828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:22.338680  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.342609  831680 addons.go:234] Setting addon cloud-spanner=true in "addons-583828"
	I1205 20:25:22.343355  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.368005  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.372557  831680 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 20:25:22.373307  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.373887  831680 addons.go:234] Setting addon default-storageclass=true in "addons-583828"
	I1205 20:25:22.373937  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.373938  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.374409  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.374615  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.376559  831680 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 20:25:22.376679  831680 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 20:25:22.377599  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.377905  831680 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 20:25:22.377928  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 20:25:22.377986  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.378256  831680 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:25:22.378278  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 20:25:22.378332  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
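The Go template passed to docker container inspect above is how minikube discovers the host port that Docker mapped to the node container's SSH port (22/tcp). Run by hand (illustrative, same template as in the log) it prints just that port, which matches the Port:32888 seen in the sshutil lines below:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-583828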
	I1205 20:25:22.392260  831680 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 20:25:22.393699  831680 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:25:22.393725  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 20:25:22.393799  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.417653  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.436725  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.440268  831680 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:25:22.440293  831680 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:25:22.440354  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.442811  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 20:25:22.445394  831680 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:25:22.447327  831680 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:25:22.447351  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:25:22.447412  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.447606  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 20:25:22.447715  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 20:25:22.449700  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 20:25:22.449723  831680 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 20:25:22.449787  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.449956  831680 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 20:25:22.451553  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 20:25:22.451619  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:25:22.451629  831680 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:25:22.451677  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.451885  831680 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 20:25:22.453094  831680 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 20:25:22.453112  831680 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 20:25:22.453165  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.454720  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.454971  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 20:25:22.457355  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W1205 20:25:22.458718  831680 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 20:25:22.461080  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 20:25:22.462312  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 20:25:22.463565  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 20:25:22.464658  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 20:25:22.464681  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 20:25:22.464744  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.464791  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.467002  831680 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-583828"
	I1205 20:25:22.467090  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.467092  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 20:25:22.467593  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.469414  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:25:22.470513  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:25:22.471659  831680 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 20:25:22.471923  831680 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:25:22.471951  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 20:25:22.472012  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.473503  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 20:25:22.473523  831680 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 20:25:22.473580  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.474734  831680 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 20:25:22.475986  831680 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 20:25:22.476006  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 20:25:22.476062  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.481755  831680 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 20:25:22.483577  831680 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:25:22.483598  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 20:25:22.483653  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.485260  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.486800  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.492190  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.492402  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.494743  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.501962  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.525126  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.525533  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.527448  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.532031  831680 out.go:177]   - Using image docker.io/busybox:stable
	I1205 20:25:22.533051  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	W1205 20:25:22.533414  831680 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 20:25:22.533462  831680 retry.go:31] will retry after 340.327697ms: ssh: handshake failed: EOF
	W1205 20:25:22.534116  831680 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 20:25:22.534135  831680 retry.go:31] will retry after 151.300109ms: ssh: handshake failed: EOF
	I1205 20:25:22.540202  831680 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 20:25:22.544977  831680 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:25:22.545002  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 20:25:22.545076  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.562214  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.724623  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
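The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1). Reconstructed from the sed expressions (not dumped from the cluster), the inserted Corefile stanza looks like:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }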
	I1205 20:25:22.739461  831680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:25:22.814190  831680 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 20:25:22.814221  831680 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 20:25:22.928623  831680 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 20:25:22.928711  831680 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 20:25:22.930051  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:25:22.933216  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:25:23.010835  831680 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:25:23.010882  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 20:25:23.013096  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:25:23.018375  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:25:23.024774  831680 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:25:23.024866  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 20:25:23.025931  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:25:23.032078  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:25:23.110859  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:25:23.128522  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:25:23.128617  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 20:25:23.130606  831680 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 20:25:23.130693  831680 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 20:25:23.211800  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 20:25:23.211912  831680 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 20:25:23.220578  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:25:23.225024  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 20:25:23.225053  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 20:25:23.310870  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:25:23.411478  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:25:23.411574  831680 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:25:23.511552  831680 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 20:25:23.511657  831680 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 20:25:23.525408  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 20:25:23.525507  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 20:25:23.623090  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 20:25:23.623194  831680 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 20:25:23.629856  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 20:25:23.822790  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:25:23.822890  831680 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:25:23.918481  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 20:25:23.918574  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 20:25:24.010673  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 20:25:24.010776  831680 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 20:25:24.016755  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 20:25:24.016845  831680 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 20:25:24.213089  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:25:24.213116  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 20:25:24.309612  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 20:25:24.309724  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 20:25:24.312784  831680 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:25:24.312865  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 20:25:24.411598  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:25:24.428582  831680 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.703907598s)
	I1205 20:25:24.428805  831680 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 20:25:24.428729  831680 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.6892292s)
	I1205 20:25:24.430063  831680 node_ready.go:35] waiting up to 6m0s for node "addons-583828" to be "Ready" ...
	I1205 20:25:24.523373  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:25:24.613162  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:25:24.722691  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.792481903s)
	I1205 20:25:25.111937  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 20:25:25.112035  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 20:25:25.427796  831680 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-583828" context rescaled to 1 replicas
	I1205 20:25:25.622175  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 20:25:25.622209  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 20:25:25.911032  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 20:25:25.911067  831680 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 20:25:26.031130  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 20:25:26.031171  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 20:25:26.219631  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 20:25:26.219659  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 20:25:26.410549  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:25:26.410585  831680 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 20:25:26.518211  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:25:26.829982  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
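node_ready.go polls the node's Ready condition until it flips to True. An equivalent manual check (illustrative kubectl, not taken from this log) is:

    kubectl get node addons-583828 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'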
	I1205 20:25:27.331347  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.398084856s)
	I1205 20:25:27.331427  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.318304401s)
	I1205 20:25:27.331474  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.313012721s)
	I1205 20:25:27.622940  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.596922141s)
	I1205 20:25:27.623228  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.59111365s)
	W1205 20:25:27.929470  831680 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
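The warning above is a race between two storage classes competing for the default slot (minikube's standard class and rancher's local-path). The default marker is only an annotation on the StorageClass object; a generic way to inspect or flip it (illustrative, not part of this run) is:

    kubectl get storageclass
    kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'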
	I1205 20:25:29.019506  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:29.135391  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.024401389s)
	I1205 20:25:29.135459  831680 addons.go:475] Verifying addon ingress=true in "addons-583828"
	I1205 20:25:29.135474  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.914860228s)
	I1205 20:25:29.135561  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.824597205s)
	I1205 20:25:29.135581  831680 addons.go:475] Verifying addon registry=true in "addons-583828"
	I1205 20:25:29.135784  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.505880051s)
	I1205 20:25:29.135897  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.724207446s)
	I1205 20:25:29.135924  831680 addons.go:475] Verifying addon metrics-server=true in "addons-583828"
	I1205 20:25:29.137893  831680 out.go:177] * Verifying ingress addon...
	I1205 20:25:29.137908  831680 out.go:177] * Verifying registry addon...
	I1205 20:25:29.139928  831680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 20:25:29.139991  831680 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 20:25:29.216130  831680 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 20:25:29.216165  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:29.216389  831680 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 20:25:29.216404  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
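The kapi.go polling above amounts to listing pods by the same label selectors the addons install; a manual spot-check (illustrative, selectors and namespaces taken from the log) would be:

    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry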
	I1205 20:25:29.644151  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:29.644750  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:29.710911  831680 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 20:25:29.710997  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:29.739321  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:30.030231  831680 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 20:25:30.037992  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.424734986s)
	I1205 20:25:30.037911  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.514424126s)
	W1205 20:25:30.038366  831680 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:25:30.038428  831680 retry.go:31] will retry after 174.297587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
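The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, so the API server has no mapping for the kind yet. The retry below (20:25:30, apply --force, completed at 20:25:33) resolves it. An equivalent manual sequence (illustrative, using the manifest paths from the log) is to install the CRDs first and wait for them to be established:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml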
	I1205 20:25:30.039846  831680 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-583828 service yakd-dashboard -n yakd-dashboard
	
	I1205 20:25:30.114006  831680 addons.go:234] Setting addon gcp-auth=true in "addons-583828"
	I1205 20:25:30.114141  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:30.114735  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:30.143059  831680 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 20:25:30.143126  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:30.147500  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:30.162360  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:30.212957  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:25:30.248317  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:30.643673  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:30.644338  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:30.832541  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.31420902s)
	I1205 20:25:30.832599  831680 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-583828"
	I1205 20:25:30.834999  831680 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 20:25:30.837265  831680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 20:25:30.840556  831680 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 20:25:30.840577  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:31.144051  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:31.144561  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:31.341266  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:31.433896  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:31.643983  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:31.644409  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:31.841441  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:32.144316  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:32.144738  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:32.341988  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:32.644171  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:32.644627  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:32.841595  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:33.145141  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:33.145738  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:33.157759  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.944744053s)
	I1205 20:25:33.157852  831680 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.014753329s)
	I1205 20:25:33.159525  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:25:33.160909  831680 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 20:25:33.162076  831680 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 20:25:33.162094  831680 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 20:25:33.179534  831680 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 20:25:33.179563  831680 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 20:25:33.196523  831680 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:25:33.196552  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 20:25:33.214223  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:25:33.341016  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:33.434009  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:33.559470  831680 addons.go:475] Verifying addon gcp-auth=true in "addons-583828"
	I1205 20:25:33.562054  831680 out.go:177] * Verifying gcp-auth addon...
	I1205 20:25:33.564163  831680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 20:25:33.566974  831680 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 20:25:33.566995  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:33.643912  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:33.644292  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:33.841765  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:34.067430  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:34.143565  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:34.144149  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:34.340944  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:34.567805  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:34.643865  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:34.644462  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:34.841350  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:35.068239  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:35.143261  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:35.143983  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:35.341731  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:35.567410  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:35.643835  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:35.644285  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:35.841448  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:35.934307  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:36.068196  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:36.143504  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:36.143861  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:36.341163  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:36.567984  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:36.644358  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:36.644679  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:36.841432  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:37.067743  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:37.143800  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:37.144281  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:37.340979  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:37.568316  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:37.643165  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:37.643811  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:37.841792  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:38.067399  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:38.143309  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:38.143838  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:38.341368  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:38.434424  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:38.567404  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:38.643153  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:38.643631  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:38.841510  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:39.067651  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:39.143773  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:39.144045  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:39.340916  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:39.567486  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:39.643470  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:39.643890  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:39.841554  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:40.068083  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:40.143952  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:40.144365  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:40.341324  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:40.567939  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:40.644102  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:40.644540  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:40.841696  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:40.933420  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:41.067279  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:41.143350  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:41.143742  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:41.340599  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:41.625269  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:41.719806  831680 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 20:25:41.719901  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:41.719929  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:41.841368  831680 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 20:25:41.841402  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:41.933359  831680 node_ready.go:49] node "addons-583828" has status "Ready":"True"
	I1205 20:25:41.933390  831680 node_ready.go:38] duration metric: took 17.503244864s for node "addons-583828" to be "Ready" ...
	I1205 20:25:41.933403  831680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:25:41.946120  831680 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace to be "Ready" ...
	I1205 20:25:42.113859  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:42.214292  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:42.214957  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:42.343363  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:42.568274  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:42.669229  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:42.669803  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:42.842270  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:43.068666  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:43.144337  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:43.144394  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:43.342422  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:43.567639  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:43.668747  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:43.668889  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:43.842238  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:43.952481  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:44.112559  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:44.145621  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:44.145881  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:44.342782  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:44.568502  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:44.669525  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:44.669771  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:44.842200  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:45.068191  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:45.169353  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:45.169636  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:45.342419  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:45.568547  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:45.643973  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:45.644207  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:45.842113  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:46.068530  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:46.144106  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:46.144306  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:46.342024  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:46.451530  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:46.567801  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:46.644236  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:46.644753  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:46.842867  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:47.067982  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:47.145513  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:47.147113  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:47.341945  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:47.568131  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:47.644336  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:47.644551  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:47.842439  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:48.068933  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:48.144128  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:48.144300  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:48.341751  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:48.452409  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:48.567712  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:48.643942  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:48.644423  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:48.842641  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:49.068328  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:49.144708  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:49.145082  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:49.342497  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:49.568332  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:49.644557  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:49.645178  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:49.842003  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:50.068346  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:50.143835  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:50.143936  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:50.342803  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:50.452530  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:50.567920  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:50.644370  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:50.644767  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:50.844853  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:51.068594  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:51.144223  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:51.144490  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:51.342198  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:51.568389  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:51.645164  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:51.645264  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:51.842147  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:52.068323  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:52.143360  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:52.143536  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:52.342145  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:52.567677  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:52.643925  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:52.644305  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:52.842466  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:52.953182  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:53.068922  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:53.144459  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:53.144841  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:53.342023  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:53.612428  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:53.645320  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:53.645589  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:53.843419  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:54.068851  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:54.145383  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:54.145713  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:54.342403  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:54.567896  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:54.644225  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:54.644428  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:54.842469  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:55.067854  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:55.143979  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:55.144603  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:55.342346  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:55.452996  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:55.568521  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:55.644121  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:55.644304  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:55.842431  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:56.068595  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:56.144838  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:56.145049  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:56.341725  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:56.612171  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:56.644390  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:56.644779  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:56.842880  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:57.112331  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:57.144916  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:57.144948  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:57.342823  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:57.568554  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:57.647051  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:57.647057  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:57.842689  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:57.952122  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:58.068068  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:58.144638  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:58.144954  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:58.344779  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:58.568740  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:58.644474  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:58.644878  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:58.842107  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:59.068382  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:59.143684  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:59.143904  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:59.342517  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:59.627532  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:59.714723  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:59.715964  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:59.915191  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:00.016328  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:00.113137  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:00.144522  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:00.145566  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:00.341816  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:00.612428  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:00.712712  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:00.713134  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:00.842415  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:01.068099  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:01.144475  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:01.145381  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:01.341484  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:01.568110  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:01.644751  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:01.645233  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:01.850808  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:02.067732  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:02.144459  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:02.144756  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:02.342467  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:02.452353  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:02.568746  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:02.643912  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:02.644253  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:02.843614  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:03.068637  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:03.144146  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:03.144327  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:03.341648  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:03.568126  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:03.644506  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:03.645092  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:03.842469  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:04.067663  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:04.143707  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:04.143983  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:04.342127  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:04.453269  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:04.568502  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:04.644035  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:04.644827  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:04.842905  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:05.067680  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:05.143826  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:05.144538  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:05.342476  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:05.568312  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:05.643562  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:05.643908  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:05.842766  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:06.068822  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:06.144139  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:06.144406  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:06.342167  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:06.568985  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:06.669581  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:06.669704  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:06.842543  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:06.952319  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:07.068613  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:07.143753  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:07.143954  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:07.341547  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:07.567785  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:07.644108  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:07.644396  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:07.842212  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:08.068918  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:08.169777  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:08.169950  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:08.342807  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:08.567588  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:08.644257  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:08.644390  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:08.842435  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:08.952872  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:09.113717  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:09.145447  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:09.145705  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:09.414310  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:09.618528  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:09.714191  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:09.717180  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:09.915006  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:10.113296  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:10.215179  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:10.217891  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:10.415255  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:10.612209  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:10.714127  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:10.714410  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:10.912738  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:11.012444  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:11.112262  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:11.144981  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:11.145514  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:11.342806  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:11.568185  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:11.644795  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:11.645323  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:11.843612  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:12.068257  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:12.148681  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:12.248447  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:12.342943  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:12.567983  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:12.644320  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:12.644462  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:12.842836  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:13.068417  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:13.143631  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:13.144862  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:13.342288  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:13.452231  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:13.568370  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:13.643837  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:13.644005  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:13.843178  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:14.068400  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:14.145141  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:14.146419  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:14.342156  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:14.568443  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:14.644069  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:14.644459  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:14.841659  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:15.068746  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:15.144602  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:15.145204  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:15.342281  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:15.567793  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:15.644371  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:15.644810  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:15.841868  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:15.953053  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:16.068332  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:16.144694  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:16.144929  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:16.343194  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:16.568614  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:16.643890  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:16.644292  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:16.842523  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:17.068778  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:17.144511  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:17.145098  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:17.342373  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:17.568701  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:17.646499  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:17.646753  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:17.842318  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:18.068288  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:18.144844  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:18.145410  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:18.342208  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:18.452073  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:18.568173  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:18.644623  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:18.644774  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:18.842673  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:19.068838  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:19.169834  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:19.169992  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:19.341704  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:19.568409  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:19.644133  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:19.644451  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:19.842842  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:20.111630  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:20.144418  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:20.144794  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:20.342324  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:20.455555  831680 pod_ready.go:93] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.455586  831680 pod_ready.go:82] duration metric: took 38.509422639s for pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.455610  831680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dkkxw" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.463291  831680 pod_ready.go:93] pod "coredns-7c65d6cfc9-dkkxw" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.463321  831680 pod_ready.go:82] duration metric: took 7.702515ms for pod "coredns-7c65d6cfc9-dkkxw" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.463356  831680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.468461  831680 pod_ready.go:93] pod "etcd-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.468483  831680 pod_ready.go:82] duration metric: took 5.119928ms for pod "etcd-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.468494  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.512489  831680 pod_ready.go:93] pod "kube-apiserver-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.512517  831680 pod_ready.go:82] duration metric: took 44.016979ms for pod "kube-apiserver-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.512528  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.518277  831680 pod_ready.go:93] pod "kube-controller-manager-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.518299  831680 pod_ready.go:82] duration metric: took 5.764644ms for pod "kube-controller-manager-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.518311  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b2sh" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.568020  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:20.644874  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:20.645213  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:20.842543  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:20.850610  831680 pod_ready.go:93] pod "kube-proxy-7b2sh" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.850639  831680 pod_ready.go:82] duration metric: took 332.319507ms for pod "kube-proxy-7b2sh" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.850652  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:21.067950  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:21.168863  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:21.211766  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:21.249983  831680 pod_ready.go:93] pod "kube-scheduler-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:21.250009  831680 pod_ready.go:82] duration metric: took 399.349463ms for pod "kube-scheduler-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:21.250020  831680 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:21.342354  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:21.567655  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:21.643955  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:21.644547  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:21.842853  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:22.067419  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:22.146612  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:22.146898  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:22.342865  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:22.612465  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:22.711823  831680 kapi.go:107] duration metric: took 53.571886889s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 20:26:22.712583  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:22.913143  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:23.111690  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:23.144284  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:23.256276  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:23.342607  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:23.567674  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:23.644752  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:23.842333  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:24.068276  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:24.144633  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:24.343406  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:24.567636  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:24.644225  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:24.842634  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:25.068510  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:25.144079  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:25.260254  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:25.341797  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:25.612530  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:25.645554  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:25.842776  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:26.067849  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:26.168938  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:26.342582  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:26.567368  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:26.645036  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:26.843227  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:27.068558  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:27.144790  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:27.343212  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:27.568978  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:27.670312  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:27.755911  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:27.841365  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:28.114464  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:28.213110  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:28.415357  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:28.631516  831680 kapi.go:107] duration metric: took 55.067345784s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 20:26:28.633464  831680 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-583828 cluster.
	I1205 20:26:28.710886  831680 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 20:26:28.712350  831680 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
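	Note: the three gcp-auth messages above describe an opt-out via a pod label. A minimal sketch of a pod that skips the credential mount (pod name and image are illustrative; per the message, only the presence of the `gcp-auth-skip-secret` label key matters) could be applied with:

	  kubectl --context addons-583828 apply -f - <<EOF
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds              # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"  # key named in the log message; value is arbitrary
	  spec:
	    containers:
	    - name: busybox
	      image: gcr.io/k8s-minikube/busybox
	      command: ["sleep", "3600"]
	  EOF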
	I1205 20:26:28.721500  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:28.843087  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:29.213474  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:29.412778  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:29.713886  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:29.816626  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:29.915291  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:30.144565  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:30.342700  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:30.645065  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:30.841860  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:31.144835  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:31.342617  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:31.644149  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:31.845324  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:32.144312  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:32.256957  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:32.343040  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:32.644280  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:32.842223  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:33.144804  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:33.342685  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:33.646249  831680 kapi.go:107] duration metric: took 1m4.506247962s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 20:26:33.841825  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:34.342284  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:34.756232  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:34.842840  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:35.342257  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:35.904425  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:36.342832  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:36.842595  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:37.256168  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:37.342423  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:37.842295  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:38.342737  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:38.841837  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:39.256600  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:39.342153  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:39.842972  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:40.342781  831680 kapi.go:107] duration metric: took 1m9.505513818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 20:26:40.344653  831680 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1205 20:26:40.346074  831680 addons.go:510] duration metric: took 1m18.01538325s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget cloud-spanner metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1205 20:26:41.757060  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:44.256439  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:46.813376  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:49.255762  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:51.756546  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:54.256613  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:56.256669  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:58.757170  831680 pod_ready.go:93] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:58.757270  831680 pod_ready.go:82] duration metric: took 37.507238319s for pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:58.757298  831680 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5zspz" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:58.767046  831680 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5zspz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:58.767079  831680 pod_ready.go:82] duration metric: took 9.767421ms for pod "nvidia-device-plugin-daemonset-5zspz" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:58.767108  831680 pod_ready.go:39] duration metric: took 1m16.833690429s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:26:58.767135  831680 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:26:58.767180  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:26:58.767246  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:26:58.803654  831680 cri.go:89] found id: "98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:26:58.803684  831680 cri.go:89] found id: ""
	I1205 20:26:58.803693  831680 logs.go:282] 1 containers: [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889]
	I1205 20:26:58.803744  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.807187  831680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:26:58.807275  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:26:58.842029  831680 cri.go:89] found id: "feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:26:58.842051  831680 cri.go:89] found id: ""
	I1205 20:26:58.842060  831680 logs.go:282] 1 containers: [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201]
	I1205 20:26:58.842106  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.845891  831680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:26:58.845954  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:26:58.881333  831680 cri.go:89] found id: "978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:26:58.881361  831680 cri.go:89] found id: ""
	I1205 20:26:58.881372  831680 logs.go:282] 1 containers: [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629]
	I1205 20:26:58.881423  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.885224  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:26:58.885298  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:26:58.920628  831680 cri.go:89] found id: "c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:26:58.920649  831680 cri.go:89] found id: ""
	I1205 20:26:58.920657  831680 logs.go:282] 1 containers: [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01]
	I1205 20:26:58.920703  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.924275  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:26:58.924343  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:26:58.958807  831680 cri.go:89] found id: "42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:26:58.958828  831680 cri.go:89] found id: ""
	I1205 20:26:58.958836  831680 logs.go:282] 1 containers: [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8]
	I1205 20:26:58.958881  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.962504  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:26:58.962576  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:26:58.998901  831680 cri.go:89] found id: "554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:26:58.998930  831680 cri.go:89] found id: ""
	I1205 20:26:58.998939  831680 logs.go:282] 1 containers: [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578]
	I1205 20:26:58.998997  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:59.002419  831680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:26:59.002479  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:26:59.036667  831680 cri.go:89] found id: "ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:26:59.036697  831680 cri.go:89] found id: ""
	I1205 20:26:59.036708  831680 logs.go:282] 1 containers: [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c]
	I1205 20:26:59.036752  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:59.040283  831680 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:26:59.040315  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:26:59.120466  831680 logs.go:123] Gathering logs for container status ...
	I1205 20:26:59.120517  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:26:59.164079  831680 logs.go:123] Gathering logs for dmesg ...
	I1205 20:26:59.164112  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:26:59.191118  831680 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:26:59.191160  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:26:59.296797  831680 logs.go:123] Gathering logs for etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] ...
	I1205 20:26:59.296833  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:26:59.355024  831680 logs.go:123] Gathering logs for coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] ...
	I1205 20:26:59.355082  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:26:59.392871  831680 logs.go:123] Gathering logs for kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] ...
	I1205 20:26:59.392925  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:26:59.430387  831680 logs.go:123] Gathering logs for kubelet ...
	I1205 20:26:59.430421  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:26:59.524394  831680 logs.go:123] Gathering logs for kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] ...
	I1205 20:26:59.524435  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:26:59.571513  831680 logs.go:123] Gathering logs for kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] ...
	I1205 20:26:59.571549  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:26:59.611369  831680 logs.go:123] Gathering logs for kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] ...
	I1205 20:26:59.611406  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:26:59.669885  831680 logs.go:123] Gathering logs for kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] ...
	I1205 20:26:59.669929  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:02.206306  831680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:27:02.221639  831680 api_server.go:72] duration metric: took 1m39.890978267s to wait for apiserver process to appear ...
	I1205 20:27:02.221673  831680 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:27:02.221727  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:27:02.221782  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:27:02.258378  831680 cri.go:89] found id: "98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:02.258408  831680 cri.go:89] found id: ""
	I1205 20:27:02.258416  831680 logs.go:282] 1 containers: [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889]
	I1205 20:27:02.258464  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.262228  831680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:27:02.262301  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:27:02.297341  831680 cri.go:89] found id: "feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:02.297377  831680 cri.go:89] found id: ""
	I1205 20:27:02.297388  831680 logs.go:282] 1 containers: [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201]
	I1205 20:27:02.297443  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.301020  831680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:27:02.301087  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:27:02.337844  831680 cri.go:89] found id: "978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:02.337890  831680 cri.go:89] found id: ""
	I1205 20:27:02.337901  831680 logs.go:282] 1 containers: [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629]
	I1205 20:27:02.337959  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.341911  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:27:02.342003  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:27:02.377648  831680 cri.go:89] found id: "c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:02.377670  831680 cri.go:89] found id: ""
	I1205 20:27:02.377678  831680 logs.go:282] 1 containers: [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01]
	I1205 20:27:02.377723  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.381391  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:27:02.381465  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:27:02.417806  831680 cri.go:89] found id: "42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:02.417833  831680 cri.go:89] found id: ""
	I1205 20:27:02.417845  831680 logs.go:282] 1 containers: [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8]
	I1205 20:27:02.417893  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.421889  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:27:02.421962  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:27:02.458138  831680 cri.go:89] found id: "554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:02.458166  831680 cri.go:89] found id: ""
	I1205 20:27:02.458177  831680 logs.go:282] 1 containers: [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578]
	I1205 20:27:02.458235  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.462096  831680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:27:02.462154  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:27:02.497047  831680 cri.go:89] found id: "ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:02.497075  831680 cri.go:89] found id: ""
	I1205 20:27:02.497083  831680 logs.go:282] 1 containers: [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c]
	I1205 20:27:02.497129  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.500737  831680 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:27:02.500764  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:27:02.601676  831680 logs.go:123] Gathering logs for kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] ...
	I1205 20:27:02.601706  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:02.648766  831680 logs.go:123] Gathering logs for etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] ...
	I1205 20:27:02.648806  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:02.701070  831680 logs.go:123] Gathering logs for coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] ...
	I1205 20:27:02.701117  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:02.739322  831680 logs.go:123] Gathering logs for kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] ...
	I1205 20:27:02.739373  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:02.781198  831680 logs.go:123] Gathering logs for kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] ...
	I1205 20:27:02.781234  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:02.839513  831680 logs.go:123] Gathering logs for kubelet ...
	I1205 20:27:02.839552  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:27:02.927070  831680 logs.go:123] Gathering logs for dmesg ...
	I1205 20:27:02.927112  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:27:02.955779  831680 logs.go:123] Gathering logs for kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] ...
	I1205 20:27:02.955818  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:02.990872  831680 logs.go:123] Gathering logs for kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] ...
	I1205 20:27:02.990913  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:03.026581  831680 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:27:03.026611  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:27:03.102087  831680 logs.go:123] Gathering logs for container status ...
	I1205 20:27:03.102129  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:27:05.648657  831680 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 20:27:05.652650  831680 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 20:27:05.653582  831680 api_server.go:141] control plane version: v1.31.2
	I1205 20:27:05.653607  831680 api_server.go:131] duration metric: took 3.431927171s to wait for apiserver health ...
	I1205 20:27:05.653622  831680 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:27:05.653646  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:27:05.653697  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:27:05.689375  831680 cri.go:89] found id: "98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:05.689403  831680 cri.go:89] found id: ""
	I1205 20:27:05.689415  831680 logs.go:282] 1 containers: [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889]
	I1205 20:27:05.689468  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.693022  831680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:27:05.693107  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:27:05.728582  831680 cri.go:89] found id: "feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:05.728612  831680 cri.go:89] found id: ""
	I1205 20:27:05.728623  831680 logs.go:282] 1 containers: [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201]
	I1205 20:27:05.728695  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.732551  831680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:27:05.732634  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:27:05.768297  831680 cri.go:89] found id: "978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:05.768324  831680 cri.go:89] found id: ""
	I1205 20:27:05.768332  831680 logs.go:282] 1 containers: [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629]
	I1205 20:27:05.768391  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.772092  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:27:05.772155  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:27:05.807176  831680 cri.go:89] found id: "c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:05.807199  831680 cri.go:89] found id: ""
	I1205 20:27:05.807206  831680 logs.go:282] 1 containers: [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01]
	I1205 20:27:05.807261  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.810977  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:27:05.811040  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:27:05.848206  831680 cri.go:89] found id: "42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:05.848244  831680 cri.go:89] found id: ""
	I1205 20:27:05.848257  831680 logs.go:282] 1 containers: [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8]
	I1205 20:27:05.848309  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.852151  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:27:05.852232  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:27:05.890010  831680 cri.go:89] found id: "554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:05.890035  831680 cri.go:89] found id: ""
	I1205 20:27:05.890043  831680 logs.go:282] 1 containers: [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578]
	I1205 20:27:05.890100  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.893706  831680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:27:05.893763  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:27:05.928421  831680 cri.go:89] found id: "ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:05.928449  831680 cri.go:89] found id: ""
	I1205 20:27:05.928458  831680 logs.go:282] 1 containers: [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c]
	I1205 20:27:05.928515  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.932122  831680 logs.go:123] Gathering logs for kubelet ...
	I1205 20:27:05.932148  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:27:06.019265  831680 logs.go:123] Gathering logs for etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] ...
	I1205 20:27:06.019312  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:06.072058  831680 logs.go:123] Gathering logs for coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] ...
	I1205 20:27:06.072107  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:06.110337  831680 logs.go:123] Gathering logs for kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] ...
	I1205 20:27:06.110372  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:06.145985  831680 logs.go:123] Gathering logs for kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] ...
	I1205 20:27:06.146020  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:06.205238  831680 logs.go:123] Gathering logs for kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] ...
	I1205 20:27:06.205281  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:06.241473  831680 logs.go:123] Gathering logs for dmesg ...
	I1205 20:27:06.241502  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:27:06.269057  831680 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:27:06.269099  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:27:06.378903  831680 logs.go:123] Gathering logs for kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] ...
	I1205 20:27:06.378938  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:06.426943  831680 logs.go:123] Gathering logs for kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] ...
	I1205 20:27:06.426985  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:06.469419  831680 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:27:06.469465  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:27:06.554104  831680 logs.go:123] Gathering logs for container status ...
	I1205 20:27:06.554155  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:27:09.109489  831680 system_pods.go:59] 19 kube-system pods found
	I1205 20:27:09.109536  831680 system_pods.go:61] "amd-gpu-device-plugin-rc729" [c2c85683-d2fe-4fe5-bee0-cb72305ef72e] Running
	I1205 20:27:09.109543  831680 system_pods.go:61] "coredns-7c65d6cfc9-dkkxw" [ab688262-31c0-4d73-84f9-79988d76bb32] Running
	I1205 20:27:09.109547  831680 system_pods.go:61] "csi-hostpath-attacher-0" [5d14e0fd-b0e0-467f-b1cb-d8385382d57e] Running
	I1205 20:27:09.109550  831680 system_pods.go:61] "csi-hostpath-resizer-0" [e9117e43-09b3-4a31-8336-6610a83137be] Running
	I1205 20:27:09.109556  831680 system_pods.go:61] "csi-hostpathplugin-xjjqm" [e76e7df4-19a0-4da7-959e-77806daa2ad0] Running
	I1205 20:27:09.109561  831680 system_pods.go:61] "etcd-addons-583828" [0e09f289-f6cc-4d00-8613-be519b92139f] Running
	I1205 20:27:09.109565  831680 system_pods.go:61] "kindnet-dfgk2" [853b95db-fec0-426a-809a-05c807358dfa] Running
	I1205 20:27:09.109568  831680 system_pods.go:61] "kube-apiserver-addons-583828" [3efa3769-d977-4896-922f-f11b696b2661] Running
	I1205 20:27:09.109571  831680 system_pods.go:61] "kube-controller-manager-addons-583828" [c763df0e-ccca-4c39-bf2f-a7e3393f34db] Running
	I1205 20:27:09.109575  831680 system_pods.go:61] "kube-ingress-dns-minikube" [7fdb2265-3f78-4fd7-9f95-2ee7d4361c8c] Running
	I1205 20:27:09.109578  831680 system_pods.go:61] "kube-proxy-7b2sh" [80fbfc76-9441-46fa-b36f-0b4c43010444] Running
	I1205 20:27:09.109581  831680 system_pods.go:61] "kube-scheduler-addons-583828" [5c1ad2e6-957a-4098-b3b6-efe050ca5709] Running
	I1205 20:27:09.109584  831680 system_pods.go:61] "metrics-server-84c5f94fbc-lc9cp" [30aaf999-d2c9-45af-b24e-e74e1c57353b] Running
	I1205 20:27:09.109588  831680 system_pods.go:61] "nvidia-device-plugin-daemonset-5zspz" [640da076-aa23-44e4-8e0d-03530daed62f] Running
	I1205 20:27:09.109591  831680 system_pods.go:61] "registry-66c9cd494c-z49gz" [fe21bb58-8336-4e34-b5f4-ad786e9a2fac] Running
	I1205 20:27:09.109594  831680 system_pods.go:61] "registry-proxy-fzjzn" [6dd2b29c-df34-4531-be7e-32c564376c8d] Running
	I1205 20:27:09.109597  831680 system_pods.go:61] "snapshot-controller-56fcc65765-9xqwt" [56140c8a-3229-4005-b2ff-25c148dd6e76] Running
	I1205 20:27:09.109600  831680 system_pods.go:61] "snapshot-controller-56fcc65765-wwprs" [cc942f85-fc68-4c97-a27b-fc783a1ae47c] Running
	I1205 20:27:09.109604  831680 system_pods.go:61] "storage-provisioner" [bc98964a-3b9e-4e28-8503-ef8578884db4] Running
	I1205 20:27:09.109610  831680 system_pods.go:74] duration metric: took 3.455983098s to wait for pod list to return data ...
	I1205 20:27:09.109622  831680 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:27:09.112252  831680 default_sa.go:45] found service account: "default"
	I1205 20:27:09.112277  831680 default_sa.go:55] duration metric: took 2.64869ms for default service account to be created ...
	I1205 20:27:09.112285  831680 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:27:09.121090  831680 system_pods.go:86] 19 kube-system pods found
	I1205 20:27:09.121119  831680 system_pods.go:89] "amd-gpu-device-plugin-rc729" [c2c85683-d2fe-4fe5-bee0-cb72305ef72e] Running
	I1205 20:27:09.121125  831680 system_pods.go:89] "coredns-7c65d6cfc9-dkkxw" [ab688262-31c0-4d73-84f9-79988d76bb32] Running
	I1205 20:27:09.121129  831680 system_pods.go:89] "csi-hostpath-attacher-0" [5d14e0fd-b0e0-467f-b1cb-d8385382d57e] Running
	I1205 20:27:09.121133  831680 system_pods.go:89] "csi-hostpath-resizer-0" [e9117e43-09b3-4a31-8336-6610a83137be] Running
	I1205 20:27:09.121137  831680 system_pods.go:89] "csi-hostpathplugin-xjjqm" [e76e7df4-19a0-4da7-959e-77806daa2ad0] Running
	I1205 20:27:09.121140  831680 system_pods.go:89] "etcd-addons-583828" [0e09f289-f6cc-4d00-8613-be519b92139f] Running
	I1205 20:27:09.121144  831680 system_pods.go:89] "kindnet-dfgk2" [853b95db-fec0-426a-809a-05c807358dfa] Running
	I1205 20:27:09.121148  831680 system_pods.go:89] "kube-apiserver-addons-583828" [3efa3769-d977-4896-922f-f11b696b2661] Running
	I1205 20:27:09.121152  831680 system_pods.go:89] "kube-controller-manager-addons-583828" [c763df0e-ccca-4c39-bf2f-a7e3393f34db] Running
	I1205 20:27:09.121155  831680 system_pods.go:89] "kube-ingress-dns-minikube" [7fdb2265-3f78-4fd7-9f95-2ee7d4361c8c] Running
	I1205 20:27:09.121159  831680 system_pods.go:89] "kube-proxy-7b2sh" [80fbfc76-9441-46fa-b36f-0b4c43010444] Running
	I1205 20:27:09.121162  831680 system_pods.go:89] "kube-scheduler-addons-583828" [5c1ad2e6-957a-4098-b3b6-efe050ca5709] Running
	I1205 20:27:09.121169  831680 system_pods.go:89] "metrics-server-84c5f94fbc-lc9cp" [30aaf999-d2c9-45af-b24e-e74e1c57353b] Running
	I1205 20:27:09.121175  831680 system_pods.go:89] "nvidia-device-plugin-daemonset-5zspz" [640da076-aa23-44e4-8e0d-03530daed62f] Running
	I1205 20:27:09.121179  831680 system_pods.go:89] "registry-66c9cd494c-z49gz" [fe21bb58-8336-4e34-b5f4-ad786e9a2fac] Running
	I1205 20:27:09.121182  831680 system_pods.go:89] "registry-proxy-fzjzn" [6dd2b29c-df34-4531-be7e-32c564376c8d] Running
	I1205 20:27:09.121186  831680 system_pods.go:89] "snapshot-controller-56fcc65765-9xqwt" [56140c8a-3229-4005-b2ff-25c148dd6e76] Running
	I1205 20:27:09.121194  831680 system_pods.go:89] "snapshot-controller-56fcc65765-wwprs" [cc942f85-fc68-4c97-a27b-fc783a1ae47c] Running
	I1205 20:27:09.121197  831680 system_pods.go:89] "storage-provisioner" [bc98964a-3b9e-4e28-8503-ef8578884db4] Running
	I1205 20:27:09.121205  831680 system_pods.go:126] duration metric: took 8.913738ms to wait for k8s-apps to be running ...
	I1205 20:27:09.121212  831680 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:27:09.121264  831680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:27:09.133668  831680 system_svc.go:56] duration metric: took 12.443276ms WaitForService to wait for kubelet
	I1205 20:27:09.133703  831680 kubeadm.go:582] duration metric: took 1m46.803049203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:27:09.133727  831680 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:27:09.136734  831680 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 20:27:09.136766  831680 node_conditions.go:123] node cpu capacity is 8
	I1205 20:27:09.136783  831680 node_conditions.go:105] duration metric: took 3.050647ms to run NodePressure ...
	I1205 20:27:09.136798  831680 start.go:241] waiting for startup goroutines ...
	I1205 20:27:09.136807  831680 start.go:246] waiting for cluster config update ...
	I1205 20:27:09.136828  831680 start.go:255] writing updated cluster config ...
	I1205 20:27:09.137171  831680 ssh_runner.go:195] Run: rm -f paused
	I1205 20:27:09.190358  831680 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:27:09.193651  831680 out.go:177] * Done! kubectl is now configured to use "addons-583828" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 20:33:20 addons-583828 crio[1025]: time="2024-12-05 20:33:20.738949980Z" level=info msg="Image docker.io/nginx:alpine not found" id=09d5c830-dcd4-440c-b3ce-14fd89d156bf name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:33:32 addons-583828 crio[1025]: time="2024-12-05 20:33:32.738759264Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b167033f-dc7f-44b5-b6f9-3e000d1f126b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:33:32 addons-583828 crio[1025]: time="2024-12-05 20:33:32.739027272Z" level=info msg="Image docker.io/nginx:alpine not found" id=b167033f-dc7f-44b5-b6f9-3e000d1f126b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:33:43 addons-583828 crio[1025]: time="2024-12-05 20:33:43.738780964Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=abaa8b0f-ace6-4629-9f72-df51a1eafb8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:33:43 addons-583828 crio[1025]: time="2024-12-05 20:33:43.739075325Z" level=info msg="Image docker.io/nginx:alpine not found" id=abaa8b0f-ace6-4629-9f72-df51a1eafb8a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:33:56 addons-583828 crio[1025]: time="2024-12-05 20:33:56.738096514Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=75f69623-4d13-47f5-b43e-27b0600c1f96 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:33:56 addons-583828 crio[1025]: time="2024-12-05 20:33:56.738407526Z" level=info msg="Image docker.io/nginx:alpine not found" id=75f69623-4d13-47f5-b43e-27b0600c1f96 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:34:09 addons-583828 crio[1025]: time="2024-12-05 20:34:09.738713474Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=572ec905-5ce7-4299-9283-8e5644404455 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:34:09 addons-583828 crio[1025]: time="2024-12-05 20:34:09.738944471Z" level=info msg="Image docker.io/nginx:alpine not found" id=572ec905-5ce7-4299-9283-8e5644404455 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:34:24 addons-583828 crio[1025]: time="2024-12-05 20:34:24.738202495Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fee28259-c484-4210-a3ff-f359307408be name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:34:24 addons-583828 crio[1025]: time="2024-12-05 20:34:24.738488620Z" level=info msg="Image docker.io/nginx:alpine not found" id=fee28259-c484-4210-a3ff-f359307408be name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:34:24 addons-583828 crio[1025]: time="2024-12-05 20:34:24.739060708Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=67c4edc5-5d3c-4d17-80e0-548a328e0af9 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:34:24 addons-583828 crio[1025]: time="2024-12-05 20:34:24.755975179Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 05 20:35:09 addons-583828 crio[1025]: time="2024-12-05 20:35:09.737871473Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ac48147b-3051-49b0-92c5-df5a002d9552 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:09 addons-583828 crio[1025]: time="2024-12-05 20:35:09.738144347Z" level=info msg="Image docker.io/nginx:alpine not found" id=ac48147b-3051-49b0-92c5-df5a002d9552 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:24 addons-583828 crio[1025]: time="2024-12-05 20:35:24.738321074Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0f3ebf2d-b72c-49dd-8b97-a233f5170f2a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:24 addons-583828 crio[1025]: time="2024-12-05 20:35:24.738624702Z" level=info msg="Image docker.io/nginx:alpine not found" id=0f3ebf2d-b72c-49dd-8b97-a233f5170f2a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:38 addons-583828 crio[1025]: time="2024-12-05 20:35:38.738660741Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a2dd555d-3b6c-4290-92d4-4a151536911f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:38 addons-583828 crio[1025]: time="2024-12-05 20:35:38.738907525Z" level=info msg="Image docker.io/nginx:alpine not found" id=a2dd555d-3b6c-4290-92d4-4a151536911f name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:52 addons-583828 crio[1025]: time="2024-12-05 20:35:52.737984778Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b4c94e7c-35fe-48d1-a125-547c45da95fa name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:35:52 addons-583828 crio[1025]: time="2024-12-05 20:35:52.738283133Z" level=info msg="Image docker.io/nginx:alpine not found" id=b4c94e7c-35fe-48d1-a125-547c45da95fa name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:36:06 addons-583828 crio[1025]: time="2024-12-05 20:36:06.739028910Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d09e3865-8d98-4ae6-9dc7-8354ca448598 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:36:06 addons-583828 crio[1025]: time="2024-12-05 20:36:06.739314700Z" level=info msg="Image docker.io/nginx:alpine not found" id=d09e3865-8d98-4ae6-9dc7-8354ca448598 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:36:17 addons-583828 crio[1025]: time="2024-12-05 20:36:17.737685094Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4f40e7b2-0316-4eb3-a5ca-67ce5e7ae973 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:36:17 addons-583828 crio[1025]: time="2024-12-05 20:36:17.738039099Z" level=info msg="Image docker.io/nginx:alpine not found" id=4f40e7b2-0316-4eb3-a5ca-67ce5e7ae973 name=/runtime.v1.ImageService/ImageStatus
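	Note: the CRI-O entries above show docker.io/nginx:alpine still missing after the pull attempt logged at 20:34:24, which lines up with the nginx pod's ImagePullBackOff in this test failure. A manual check from the host (a sketch; assumes the addons-583828 profile is still running) would be:

	  # open a shell on the minikube node for this profile
	  minikube -p addons-583828 ssh
	  # inside the node: retry the pull directly and watch for registry or rate-limit errors
	  sudo crictl pull docker.io/nginx:alpine
	  # confirm whether the image is now in the local store
	  sudo crictl images | grep nginx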
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd200ee7a91a4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          9 minutes ago       Running             busybox                   0                   4becac5591990       busybox
	3ee2ba2ec2cef       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             9 minutes ago       Running             controller                0                   e2282262607e1       ingress-nginx-controller-5f85ff4588-c4fhh
	cd1af0bd98187       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             9 minutes ago       Exited              patch                     3                   3d641e4195ff0       ingress-nginx-admission-patch-n769w
	6a529b44bf189       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   10 minutes ago      Exited              create                    0                   f9d6654f6e519       ingress-nginx-admission-create-qdcz4
	19290d766dd43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   c4033dafe6f49       kube-ingress-dns-minikube
	ef40984194282       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   4dd5b84d1fd29       storage-provisioner
	978912424ba57       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   21cfb5b0d810f       coredns-7c65d6cfc9-dkkxw
	ad993918bb3ca       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                           10 minutes ago      Running             kindnet-cni               0                   94fa1d19b901b       kindnet-dfgk2
	42459303e80f3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             10 minutes ago      Running             kube-proxy                0                   532032805d930       kube-proxy-7b2sh
	98a4ad0de8f4c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             11 minutes ago      Running             kube-apiserver            0                   82de1aca89145       kube-apiserver-addons-583828
	feeb541e697ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                      0                   6b8b546dc20c4       etcd-addons-583828
	c841c0b382894       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             11 minutes ago      Running             kube-scheduler            0                   b4ff6cab61172       kube-scheduler-addons-583828
	554c27961eea1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             11 minutes ago      Running             kube-controller-manager   0                   5b89e979ff58f       kube-controller-manager-addons-583828
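	Note: the listing above is the node-level crictl view (the same `sudo crictl ps -a` the log gathers earlier); it can be reproduced against this profile while the cluster is up with:

	  minikube -p addons-583828 ssh
	  sudo crictl ps -a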
	
	
	==> coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] <==
	[INFO] 10.244.0.19:56539 - 10779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099077s
	[INFO] 10.244.0.19:60113 - 62198 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004658074s
	[INFO] 10.244.0.19:60113 - 62439 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004793049s
	[INFO] 10.244.0.19:42897 - 16947 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005879233s
	[INFO] 10.244.0.19:42897 - 16668 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006192356s
	[INFO] 10.244.0.19:52883 - 60026 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00578623s
	[INFO] 10.244.0.19:52883 - 59813 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005887045s
	[INFO] 10.244.0.19:41021 - 42702 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078591s
	[INFO] 10.244.0.19:41021 - 42253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121762s
	[INFO] 10.244.0.21:49427 - 55970 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000230552s
	[INFO] 10.244.0.21:42081 - 61281 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000288918s
	[INFO] 10.244.0.21:46440 - 30975 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166915s
	[INFO] 10.244.0.21:54236 - 31133 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016539s
	[INFO] 10.244.0.21:50537 - 63442 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123188s
	[INFO] 10.244.0.21:59373 - 27825 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153511s
	[INFO] 10.244.0.21:46412 - 12778 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005744036s
	[INFO] 10.244.0.21:55115 - 16737 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006700755s
	[INFO] 10.244.0.21:55800 - 44793 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00487147s
	[INFO] 10.244.0.21:55627 - 40386 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005406998s
	[INFO] 10.244.0.21:60313 - 33442 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007137006s
	[INFO] 10.244.0.21:53320 - 23314 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007268539s
	[INFO] 10.244.0.21:45779 - 19345 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000886948s
	[INFO] 10.244.0.21:34651 - 50515 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001026868s
	[INFO] 10.244.0.25:60752 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000382497s
	[INFO] 10.244.0.25:35918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019061s
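	Note: the NXDOMAIN lines above are consistent with normal search-path expansion rather than a DNS fault: with the default pod resolver (ndots:5), names such as registry.kube-system.svc.cluster.local are first tried with each search suffix (cluster.local, us-central1-a.c.k8s-minikube.internal, google.internal, ...) before the bare name resolves with NOERROR. One way to confirm the search path from inside a running pod (using the existing default/busybox pod) is:

	  kubectl --context addons-583828 exec busybox -- cat /etc/resolv.conf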
	
	
	==> describe nodes <==
	Name:               addons-583828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-583828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=addons-583828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_25_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-583828
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:25:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-583828
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:36:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:33:26 +0000   Thu, 05 Dec 2024 20:25:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:33:26 +0000   Thu, 05 Dec 2024 20:25:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:33:26 +0000   Thu, 05 Dec 2024 20:25:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:33:26 +0000   Thu, 05 Dec 2024 20:25:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-583828
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5cdc1a1dcb246fca33732e03f1ddc97
	  System UUID:                49ad83b1-9a0e-4726-8ae1-8ba9c7e57d54
	  Boot ID:                    39024a98-8447-46b2-bbc5-7915429b9c2d
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-c4fhh    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-dkkxw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-583828                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-dfgk2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-583828                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-583828        200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-7b2sh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-583828                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 10m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node addons-583828 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node addons-583828 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node addons-583828 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m   node-controller  Node addons-583828 event: Registered Node addons-583828 in Controller
	  Normal   NodeReady                10m   kubelet          Node addons-583828 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 9e 58 22 0d b9 08 06
	[ +28.753910] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 78 7a 98 fe 25 08 06
	[  +1.292059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 28 6f da 79 a6 08 06
	[  +0.021715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e c3 0d 92 91 5a 08 06
	[Dec 5 20:11] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 58 3b a6 8d 40 08 06
	[ +30.901947] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3c 09 52 3d e1 08 06
	[  +1.444771] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 03 05 4c 3e 73 08 06
	[  +0.058589] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 48 98 e5 23 33 08 06
	[  +6.156143] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 10 f3 a9 91 d9 08 06
	[Dec 5 20:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 18 0d f3 3a 83 08 06
	[  +1.482986] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce c3 68 13 fd 23 08 06
	[  +0.033369] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 8a 70 ff f0 d7 08 06
	[  +6.306172] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca ef 8b ac b6 8f 08 06
	
	
	==> etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] <==
	{"level":"info","ts":"2024-12-05T20:25:26.732364Z","caller":"traceutil/trace.go:171","msg":"trace[1932728294] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"199.340257ms","start":"2024-12-05T20:25:26.533012Z","end":"2024-12-05T20:25:26.732352Z","steps":["trace[1932728294] 'process raft request'  (duration: 193.537191ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.809222Z","caller":"traceutil/trace.go:171","msg":"trace[1215879915] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"276.099106ms","start":"2024-12-05T20:25:26.533090Z","end":"2024-12-05T20:25:26.809189Z","steps":["trace[1215879915] 'process raft request'  (duration: 193.494264ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.809623Z","caller":"traceutil/trace.go:171","msg":"trace[1551307167] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"276.289138ms","start":"2024-12-05T20:25:26.533317Z","end":"2024-12-05T20:25:26.809606Z","steps":["trace[1551307167] 'process raft request'  (duration: 193.349913ms)","trace[1551307167] 'compare'  (duration: 82.407897ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:25:26.809944Z","caller":"traceutil/trace.go:171","msg":"trace[1466606709] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"100.393382ms","start":"2024-12-05T20:25:26.709540Z","end":"2024-12-05T20:25:26.809933Z","steps":["trace[1466606709] 'process raft request'  (duration: 99.691152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.809949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.397075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.812524Z","caller":"traceutil/trace.go:171","msg":"trace[766908047] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:439; }","duration":"200.97222ms","start":"2024-12-05T20:25:26.611533Z","end":"2024-12-05T20:25:26.812506Z","steps":["trace[766908047] 'agreement among raft nodes before linearized reading'  (duration: 198.378746ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.809689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.177847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.819703Z","caller":"traceutil/trace.go:171","msg":"trace[87578902] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:439; }","duration":"286.184954ms","start":"2024-12-05T20:25:26.533487Z","end":"2024-12-05T20:25:26.819672Z","steps":["trace[87578902] 'agreement among raft nodes before linearized reading'  (duration: 275.942532ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.809965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.492589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.820828Z","caller":"traceutil/trace.go:171","msg":"trace[681513394] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:439; }","duration":"111.354421ms","start":"2024-12-05T20:25:26.709456Z","end":"2024-12-05T20:25:26.820810Z","steps":["trace[681513394] 'agreement among raft nodes before linearized reading'  (duration: 100.447614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.810157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.487452ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.810302Z","caller":"traceutil/trace.go:171","msg":"trace[503414887] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"100.686269ms","start":"2024-12-05T20:25:26.709603Z","end":"2024-12-05T20:25:26.810290Z","steps":["trace[503414887] 'process raft request'  (duration: 99.688666ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.828003Z","caller":"traceutil/trace.go:171","msg":"trace[1437894167] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:439; }","duration":"118.320015ms","start":"2024-12-05T20:25:26.709659Z","end":"2024-12-05T20:25:26.827979Z","steps":["trace[1437894167] 'agreement among raft nodes before linearized reading'  (duration: 100.478382ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:26:28.626249Z","caller":"traceutil/trace.go:171","msg":"trace[88854942] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"116.744993ms","start":"2024-12-05T20:26:28.509486Z","end":"2024-12-05T20:26:28.626231Z","steps":["trace[88854942] 'process raft request'  (duration: 116.610104ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:26:28.626441Z","caller":"traceutil/trace.go:171","msg":"trace[1904650908] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1192; }","duration":"116.655701ms","start":"2024-12-05T20:26:28.509773Z","end":"2024-12-05T20:26:28.626429Z","steps":["trace[1904650908] 'read index received'  (duration: 116.649101ms)","trace[1904650908] 'applied index is now lower than readState.Index'  (duration: 5.275µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:26:28.626551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.750114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:26:28.627089Z","caller":"traceutil/trace.go:171","msg":"trace[795612741] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:1160; }","duration":"117.303042ms","start":"2024-12-05T20:26:28.509768Z","end":"2024-12-05T20:26:28.627071Z","steps":["trace[795612741] 'agreement among raft nodes before linearized reading'  (duration: 116.695105ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:26:28.627970Z","caller":"traceutil/trace.go:171","msg":"trace[452118737] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"118.020962ms","start":"2024-12-05T20:26:28.509930Z","end":"2024-12-05T20:26:28.627951Z","steps":["trace[452118737] 'process raft request'  (duration: 117.908388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:26:28.627986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.585638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:26:28.628022Z","caller":"traceutil/trace.go:171","msg":"trace[643759814] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1161; }","duration":"106.630714ms","start":"2024-12-05T20:26:28.521382Z","end":"2024-12-05T20:26:28.628013Z","steps":["trace[643759814] 'agreement among raft nodes before linearized reading'  (duration: 106.548916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:26:47.356770Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.866849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-lc9cp\" ","response":"range_response_count:1 size:4862"}
	{"level":"info","ts":"2024-12-05T20:26:47.356853Z","caller":"traceutil/trace.go:171","msg":"trace[1070485999] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-lc9cp; range_end:; response_count:1; response_revision:1237; }","duration":"104.96485ms","start":"2024-12-05T20:26:47.251869Z","end":"2024-12-05T20:26:47.356834Z","steps":["trace[1070485999] 'range keys from in-memory index tree'  (duration: 104.712038ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:35:12.932457Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2025}
	{"level":"info","ts":"2024-12-05T20:35:12.969773Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2025,"took":"36.651453ms","hash":3867691953,"current-db-size-bytes":7991296,"current-db-size":"8.0 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-12-05T20:35:12.969834Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3867691953,"revision":2025,"compact-revision":-1}
	
	
	==> kernel <==
	 20:36:20 up  3:18,  0 users,  load average: 0.15, 0.38, 1.76
	Linux addons-583828 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] <==
	I1205 20:34:11.114606       1 main.go:301] handling current node
	I1205 20:34:21.119723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:34:21.119775       1 main.go:301] handling current node
	I1205 20:34:31.110209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:34:31.110252       1 main.go:301] handling current node
	I1205 20:34:41.112991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:34:41.113030       1 main.go:301] handling current node
	I1205 20:34:51.116992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:34:51.117039       1 main.go:301] handling current node
	I1205 20:35:01.119113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:35:01.119163       1 main.go:301] handling current node
	I1205 20:35:11.109893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:35:11.109948       1 main.go:301] handling current node
	I1205 20:35:21.118259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:35:21.118301       1 main.go:301] handling current node
	I1205 20:35:31.110191       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:35:31.110236       1 main.go:301] handling current node
	I1205 20:35:41.113012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:35:41.113074       1 main.go:301] handling current node
	I1205 20:35:51.114380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:35:51.114425       1 main.go:301] handling current node
	I1205 20:36:01.119473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:36:01.119516       1 main.go:301] handling current node
	I1205 20:36:11.113000       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:36:11.113075       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] <==
	 > logger="UnhandledError"
	I1205 20:27:03.809289       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 20:27:19.082649       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37954: use of closed network connection
	E1205 20:27:19.260421       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37990: use of closed network connection
	I1205 20:27:28.347350       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.86.24"}
	I1205 20:28:01.484387       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1205 20:28:06.071833       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 20:28:13.486077       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 20:28:14.502367       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 20:28:18.964986       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 20:28:19.165810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.167.39"}
	I1205 20:28:27.470478       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.470622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.484594       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.484744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.486350       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.486389       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.530742       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.530900       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.622937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.622982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 20:28:28.486747       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 20:28:28.623008       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 20:28:28.725732       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 20:33:04.823537       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] <==
	E1205 20:34:00.053133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:34:08.447924       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:34:08.447973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:34:21.945233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:34:21.945284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:34:22.030249       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:34:22.030300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:34:34.925912       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:34:34.925962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:34:59.916624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:34:59.916671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:35:07.503136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:35:07.503194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:35:09.201477       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:35:09.201526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:35:23.736402       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:35:23.736449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:35:39.522820       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:35:39.522872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:35:44.848086       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:35:44.848135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:36:04.313798       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:36:04.313855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:36:14.312291       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:36:14.312352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] <==
	I1205 20:25:22.519810       1 server_linux.go:66] "Using iptables proxy"
	I1205 20:25:23.214814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 20:25:23.214968       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:25:24.815600       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:25:24.815822       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:25:25.127338       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:25:25.217908       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:25:25.217961       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:25:25.220071       1 config.go:199] "Starting service config controller"
	I1205 20:25:25.220167       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:25:25.220233       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:25:25.220260       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:25:25.220993       1 config.go:328] "Starting node config controller"
	I1205 20:25:25.221110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:25:25.320398       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:25:25.510134       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:25:25.522410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] <==
	W1205 20:25:14.423700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:25:14.423729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.423878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:25:14.423921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.423953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 20:25:14.423888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:25:14.423991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1205 20:25:14.423992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.424025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1205 20:25:14.424048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:25:14.424056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.424079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 20:25:14.424047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:25:14.424111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1205 20:25:14.424111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1205 20:25:14.424080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:15.289746       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:25:15.289800       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:25:15.305483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:25:15.305528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:15.474192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:25:15.474234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:15.508708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:25:15.508751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 20:25:18.520013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:35:04 addons-583828 kubelet[1621]: I1205 20:35:04.738218    1621 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 20:35:06 addons-583828 kubelet[1621]: E1205 20:35:06.974303    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430906973995338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:06 addons-583828 kubelet[1621]: E1205 20:35:06.974339    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430906973995338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:09 addons-583828 kubelet[1621]: E1205 20:35:09.738502    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:35:16 addons-583828 kubelet[1621]: E1205 20:35:16.830933    1621 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c, memory: /docker/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/system.slice/kubelet.service"
	Dec 05 20:35:16 addons-583828 kubelet[1621]: E1205 20:35:16.976648    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430916976358737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:16 addons-583828 kubelet[1621]: E1205 20:35:16.976681    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430916976358737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:24 addons-583828 kubelet[1621]: E1205 20:35:24.738874    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:35:26 addons-583828 kubelet[1621]: E1205 20:35:26.979174    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430926978909489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:26 addons-583828 kubelet[1621]: E1205 20:35:26.979212    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430926978909489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:36 addons-583828 kubelet[1621]: E1205 20:35:36.982136    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430936981856794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:36 addons-583828 kubelet[1621]: E1205 20:35:36.982178    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430936981856794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:38 addons-583828 kubelet[1621]: E1205 20:35:38.739145    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:35:46 addons-583828 kubelet[1621]: E1205 20:35:46.985226    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430946984940842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:46 addons-583828 kubelet[1621]: E1205 20:35:46.985285    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430946984940842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:52 addons-583828 kubelet[1621]: E1205 20:35:52.738555    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:35:56 addons-583828 kubelet[1621]: E1205 20:35:56.988465    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430956988156061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:35:56 addons-583828 kubelet[1621]: E1205 20:35:56.988501    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430956988156061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:36:06 addons-583828 kubelet[1621]: E1205 20:36:06.739564    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:36:06 addons-583828 kubelet[1621]: E1205 20:36:06.991541    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430966991271333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:36:06 addons-583828 kubelet[1621]: E1205 20:36:06.991582    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430966991271333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:36:11 addons-583828 kubelet[1621]: I1205 20:36:11.738102    1621 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 20:36:16 addons-583828 kubelet[1621]: E1205 20:36:16.993840    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430976993588664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:36:16 addons-583828 kubelet[1621]: E1205 20:36:16.993872    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430976993588664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:36:17 addons-583828 kubelet[1621]: E1205 20:36:17.738351    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	
	
	==> storage-provisioner [ef4098419428227b4d6972e656cc06bea872aea3e97c16b0c7340af1fd6d5cb5] <==
	I1205 20:25:42.556706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:25:42.566741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:25:42.566799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:25:42.613971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:25:42.614127       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"611609cd-4cc0-441e-94ab-a2e2be13b4e9", APIVersion:"v1", ResourceVersion:"894", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-583828_494b9d68-e7cd-4c8a-a94c-6c912f7efe5f became leader
	I1205 20:25:42.614218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-583828_494b9d68-e7cd-4c8a-a94c-6c912f7efe5f!
	I1205 20:25:42.715334       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-583828_494b9d68-e7cd-4c8a-a94c-6c912f7efe5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-583828 -n addons-583828
helpers_test.go:261: (dbg) Run:  kubectl --context addons-583828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-583828 describe pod nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-583828 describe pod nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w: exit status 1 (70.552175ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-583828/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:28:19 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wdtd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2wdtd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/nginx to addons-583828
	  Warning  Failed     7m29s                  kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m47s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m26s (x4 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m25s (x4 over 7m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m25s (x2 over 5m18s)  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m1s (x7 over 7m29s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m1s (x7 over 7m29s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qdcz4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-n769w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-583828 describe pod nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable ingress-dns --alsologtostderr -v=1: (1.110352722s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable ingress --alsologtostderr -v=1: (7.654736658s)
--- FAIL: TestAddons/parallel/Ingress (491.75s)
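The pod events above show the root cause of this failure: every pull of docker.io/nginx:alpine was rejected with `toomanyrequests`, Docker Hub's anonymous pull rate limit for the CI node's egress IP, so the pod never left ImagePullBackOff within the 8m0s wait. As a hedged diagnostic (not part of the minikube harness), the remaining anonymous quota for that egress IP can be read from Docker's documented rate-limit check endpoint; the Go sketch below assumes anonymous access from the same machine.

	// ratelimit_check.go - minimal sketch: fetch an anonymous pull token and
	// read Docker Hub's rate-limit headers via a HEAD request on the
	// documented ratelimitpreview/test manifest (HEAD does not consume a pull).
	// This is a standalone diagnostic, not minikube or test-harness code.
	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// 1. Anonymous pull token scoped to ratelimitpreview/test.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// 2. HEAD the manifest; the registry reports quota in response headers.
		req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		head, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer head.Body.Close()

		fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
	}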

                                                
                                    
TestAddons/parallel/MetricsServer (320s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.260838ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lc9cp" [30aaf999-d2c9-45af-b24e-e74e1c57353b] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003368259s
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (77.322732ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 2m11.708911347s

                                                
                                                
** /stderr **
I1205 20:27:33.711901  830381 retry.go:31] will retry after 2.116137386s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (70.416265ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 2m13.895866865s

                                                
                                                
** /stderr **
I1205 20:27:35.898907  830381 retry.go:31] will retry after 3.459393721s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (73.636755ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 2m17.429794225s

                                                
                                                
** /stderr **
I1205 20:27:39.432811  830381 retry.go:31] will retry after 5.203542554s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (66.954419ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 2m22.700957721s

                                                
                                                
** /stderr **
I1205 20:27:44.703649  830381 retry.go:31] will retry after 11.136561519s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (65.927957ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 2m33.904586217s

                                                
                                                
** /stderr **
I1205 20:27:55.907241  830381 retry.go:31] will retry after 10.468727506s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (69.648603ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 2m44.443735338s

                                                
                                                
** /stderr **
I1205 20:28:06.446316  830381 retry.go:31] will retry after 18.80278576s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (65.855494ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 3m3.313131509s

                                                
                                                
** /stderr **
I1205 20:28:25.315967  830381 retry.go:31] will retry after 45.734899557s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (65.350787ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 3m49.113604701s

                                                
                                                
** /stderr **
I1205 20:29:11.116564  830381 retry.go:31] will retry after 1m3.631544048s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (65.324181ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 4m52.811490401s

                                                
                                                
** /stderr **
I1205 20:30:14.814607  830381 retry.go:31] will retry after 1m11.306279878s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (64.680506ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 6m4.183733188s

                                                
                                                
** /stderr **
I1205 20:31:26.186486  830381 retry.go:31] will retry after 1m18.687788653s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-583828 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-583828 top pods -n kube-system: exit status 1 (64.489774ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dkkxw, age: 7m22.937331909s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
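
Each kubectl top pods attempt above returned "Metrics not available" until the retry budget ran out. A short sketch of how the metrics pipeline could be probed by hand in the same context, assuming the standard metrics-server registration name v1beta1.metrics.k8s.io (not shown in this report):

	# the aggregated metrics API should report Available=True once metrics-server is serving
	kubectl --context addons-583828 get apiservice v1beta1.metrics.k8s.io
	# node metrics typically become available before per-pod metrics
	kubectl --context addons-583828 top nodes
	# scrape errors show up in metrics-server's own logs
	kubectl --context addons-583828 -n kube-system logs deploy/metrics-server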
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-583828
helpers_test.go:235: (dbg) docker inspect addons-583828:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c",
	        "Created": "2024-12-05T20:25:03.731974458Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 832431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T20:25:03.852223097Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/hostname",
	        "HostsPath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/hosts",
	        "LogPath": "/var/lib/docker/containers/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c/23a3cfafc9ea2e4a4989172f2e090a0ee839d5066be84b1bc6d50704ff2f896c-json.log",
	        "Name": "/addons-583828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-583828:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-583828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7-init/diff:/var/lib/docker/overlay2/0f5bc7fa09e0d0f29301db80becc3339e358e049d584dfb307a79bde49527770/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41afe785c57b7f74990df950dd572a0d9a1bcbca1dc031bd09c84e239db1fcf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-583828",
	                "Source": "/var/lib/docker/volumes/addons-583828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-583828",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-583828",
	                "name.minikube.sigs.k8s.io": "addons-583828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed8e9041c46e14e30b26a0df885e68a7c08fd77cec87c90c7104a1f8f7ab0f11",
	            "SandboxKey": "/var/run/docker/netns/ed8e9041c46e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-583828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4f3a539ee7c46d697dbcb6db4f5ef0224be703b3ddf3422109c24e64c1203597",
	                    "EndpointID": "9cfe539f71d453e8e613ddcbc480d964c3fe844770a905f4c038b3334cdb549c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-583828",
	                        "23a3cfafc9ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
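The NetworkSettings.Ports section of the inspect output above is how the tests reach the node: 22/tcp (SSH), 2376/tcp, 5000/tcp, 8443/tcp (Kubernetes API) and 32443/tcp are each published on a random 127.0.0.1 host port. A quick way to recover a single mapping without paging through the JSON, using the standard docker CLI (nothing here is specific to this report):

	# host side of the API server port
	docker port addons-583828 8443/tcp
	# same value via an inspect template
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' addons-583828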
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-583828 -n addons-583828
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 logs -n 25: (1.206432718s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-949612                                                                     | download-only-949612   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| delete  | -p download-only-350205                                                                     | download-only-350205   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| delete  | -p download-only-949612                                                                     | download-only-949612   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| start   | --download-only -p                                                                          | download-docker-384641 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | download-docker-384641                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-384641                                                                   | download-docker-384641 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-451629   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | binary-mirror-451629                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40015                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-451629                                                                     | binary-mirror-451629   | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| addons  | disable dashboard -p                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | addons-583828                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | addons-583828                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-583828 --wait=true                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | -p addons-583828                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-583828 ip                                                                            | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-583828 ssh cat                                                                       | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | /opt/local-path-provisioner/pvc-7e18edaf-3638-4016-8b18-2b20bbc1377b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons disable                                                                | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-583828 addons                                                                        | addons-583828          | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:24:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:24:39.691689  831680 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:24:39.691822  831680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:39.691831  831680 out.go:358] Setting ErrFile to fd 2...
	I1205 20:24:39.691836  831680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:39.692053  831680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:24:39.692671  831680 out.go:352] Setting JSON to false
	I1205 20:24:39.693712  831680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11229,"bootTime":1733419051,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:24:39.693778  831680 start.go:139] virtualization: kvm guest
	I1205 20:24:39.696017  831680 out.go:177] * [addons-583828] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:24:39.697327  831680 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:24:39.697325  831680 notify.go:220] Checking for updates...
	I1205 20:24:39.699990  831680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:24:39.701330  831680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:24:39.702525  831680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:24:39.703779  831680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:24:39.705057  831680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:24:39.706350  831680 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:24:39.728535  831680 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:24:39.728623  831680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:39.774619  831680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-05 20:24:39.765310734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:39.774735  831680 docker.go:318] overlay module found
	I1205 20:24:39.776947  831680 out.go:177] * Using the docker driver based on user configuration
	I1205 20:24:39.778398  831680 start.go:297] selected driver: docker
	I1205 20:24:39.778411  831680 start.go:901] validating driver "docker" against <nil>
	I1205 20:24:39.778423  831680 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:24:39.779287  831680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:39.824840  831680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-05 20:24:39.816379028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:39.825032  831680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:24:39.825280  831680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:24:39.826853  831680 out.go:177] * Using Docker driver with root privileges
	I1205 20:24:39.828291  831680 cni.go:84] Creating CNI manager for ""
	I1205 20:24:39.828357  831680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:24:39.828370  831680 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:24:39.828426  831680 start.go:340] cluster config:
	{Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:24:39.829707  831680 out.go:177] * Starting "addons-583828" primary control-plane node in "addons-583828" cluster
	I1205 20:24:39.830844  831680 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:24:39.832294  831680 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 20:24:39.833408  831680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:24:39.833446  831680 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:24:39.833457  831680 cache.go:56] Caching tarball of preloaded images
	I1205 20:24:39.833500  831680 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 20:24:39.833554  831680 preload.go:172] Found /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:24:39.833568  831680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:24:39.833952  831680 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/config.json ...
	I1205 20:24:39.833979  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/config.json: {Name:mka9ab8b23a164b9c916173a422ec994cf906b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:24:39.849777  831680 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 20:24:39.849960  831680 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 20:24:39.849979  831680 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1205 20:24:39.849985  831680 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1205 20:24:39.850000  831680 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1205 20:24:39.850012  831680 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1205 20:24:51.691581  831680 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1205 20:24:51.691629  831680 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:24:51.691666  831680 start.go:360] acquireMachinesLock for addons-583828: {Name:mk4ded944d810c830c5a1bda8a8a9c5dc897e3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:24:51.691784  831680 start.go:364] duration metric: took 81.79µs to acquireMachinesLock for "addons-583828"
	I1205 20:24:51.691809  831680 start.go:93] Provisioning new machine with config: &{Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:24:51.691877  831680 start.go:125] createHost starting for "" (driver="docker")
	I1205 20:24:51.693764  831680 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 20:24:51.694034  831680 start.go:159] libmachine.API.Create for "addons-583828" (driver="docker")
	I1205 20:24:51.694083  831680 client.go:168] LocalClient.Create starting
	I1205 20:24:51.694178  831680 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem
	I1205 20:24:51.945489  831680 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem
	I1205 20:24:52.057379  831680 cli_runner.go:164] Run: docker network inspect addons-583828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 20:24:52.074202  831680 cli_runner.go:211] docker network inspect addons-583828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 20:24:52.074333  831680 network_create.go:284] running [docker network inspect addons-583828] to gather additional debugging logs...
	I1205 20:24:52.074367  831680 cli_runner.go:164] Run: docker network inspect addons-583828
	W1205 20:24:52.090008  831680 cli_runner.go:211] docker network inspect addons-583828 returned with exit code 1
	I1205 20:24:52.090114  831680 network_create.go:287] error running [docker network inspect addons-583828]: docker network inspect addons-583828: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-583828 not found
	I1205 20:24:52.090155  831680 network_create.go:289] output of [docker network inspect addons-583828]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-583828 not found
	
	** /stderr **
	I1205 20:24:52.090263  831680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:24:52.107649  831680 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001710900}
	I1205 20:24:52.107715  831680 network_create.go:124] attempt to create docker network addons-583828 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 20:24:52.107780  831680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-583828 addons-583828
	I1205 20:24:52.173271  831680 network_create.go:108] docker network addons-583828 192.168.49.0/24 created
	I1205 20:24:52.173316  831680 kic.go:121] calculated static IP "192.168.49.2" for the "addons-583828" container
	I1205 20:24:52.173393  831680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 20:24:52.189532  831680 cli_runner.go:164] Run: docker volume create addons-583828 --label name.minikube.sigs.k8s.io=addons-583828 --label created_by.minikube.sigs.k8s.io=true
	I1205 20:24:52.207311  831680 oci.go:103] Successfully created a docker volume addons-583828
	I1205 20:24:52.207412  831680 cli_runner.go:164] Run: docker run --rm --name addons-583828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583828 --entrypoint /usr/bin/test -v addons-583828:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1205 20:24:59.073934  831680 cli_runner.go:217] Completed: docker run --rm --name addons-583828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583828 --entrypoint /usr/bin/test -v addons-583828:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (6.866474622s)
	I1205 20:24:59.073970  831680 oci.go:107] Successfully prepared a docker volume addons-583828
	I1205 20:24:59.073993  831680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:24:59.074022  831680 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 20:24:59.074089  831680 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-583828:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 20:25:03.666897  831680 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-583828:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.592733481s)
	I1205 20:25:03.666932  831680 kic.go:203] duration metric: took 4.592908745s to extract preloaded images to volume ...
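Editor's note: the preload step above amounts to "mount the tarball read-only into a throwaway container together with the named volume, then untar into the volume". A hedged sketch of the same idea follows; the image reference and tar flags are taken from the log, while the helper name and the host path placeholder are assumptions.
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
	)
	
	// extractPreloadIntoVolume untars a preloaded image tarball into a docker
	// volume by running tar inside a disposable container, as the log does.
	// tarball is a host path, volume is a docker volume name, image is any
	// image that ships /usr/bin/tar (the kicbase image in the log).
	func extractPreloadIntoVolume(tarball, volume, image string) error {
		args := []string{
			"run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball + ":/preloaded.tar:ro",
			"-v", volume + ":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %v\n%s", err, out)
		}
		return nil
	}
	
	func main() {
		err := extractPreloadIntoVolume(
			"/path/to/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4",
			"addons-583828",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917",
		)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("preload extracted")
	}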
	W1205 20:25:03.667072  831680 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 20:25:03.667179  831680 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 20:25:03.716322  831680 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-583828 --name addons-583828 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583828 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-583828 --network addons-583828 --ip 192.168.49.2 --volume addons-583828:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1205 20:25:04.007595  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Running}}
	I1205 20:25:04.026649  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:04.046749  831680 cli_runner.go:164] Run: docker exec addons-583828 stat /var/lib/dpkg/alternatives/iptables
	I1205 20:25:04.089416  831680 oci.go:144] the created container "addons-583828" has a running status.
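Editor's note: between the docker run and the "has a running status" line, the driver repeatedly inspects the container's state. A minimal poller in the same spirit is sketched below; the interval and timeout are arbitrary values, not minikube's actual ones.
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitRunning polls `docker container inspect --format {{.State.Running}}`
	// until it prints "true" or the deadline passes.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				name, "--format", "{{.State.Running}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %s not running after %s", name, timeout)
	}
	
	func main() {
		if err := waitRunning("addons-583828", 30*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("container is running")
	}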
	I1205 20:25:04.089450  831680 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa...
	I1205 20:25:04.279308  831680 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 20:25:04.299851  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:04.326851  831680 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 20:25:04.326876  831680 kic_runner.go:114] Args: [docker exec --privileged addons-583828 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 20:25:04.422075  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:04.445043  831680 machine.go:93] provisionDockerMachine start ...
	I1205 20:25:04.445162  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:04.468182  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:04.468436  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:04.468454  831680 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:25:04.688611  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-583828
	
	I1205 20:25:04.688657  831680 ubuntu.go:169] provisioning hostname "addons-583828"
	I1205 20:25:04.688741  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:04.708383  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:04.708602  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:04.708622  831680 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-583828 && echo "addons-583828" | sudo tee /etc/hostname
	I1205 20:25:04.853379  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-583828
	
	I1205 20:25:04.853468  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:04.871719  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:04.871919  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:04.871937  831680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-583828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-583828/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-583828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:25:05.001305  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
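Editor's note: the shell fragment above is an idempotent hostname entry update: if no line for the hostname exists, either replace the existing 127.0.1.1 line or append one. The same logic expressed in plain Go is sketched below, operating on a local file path instead of over SSH; the path and helper name are purely illustrative.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"regexp"
		"strings"
	)
	
	// ensureHostsEntry rewrites (or appends) the 127.0.1.1 line so it maps to
	// the given hostname, mirroring the sed/tee script run over SSH in the log.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^\S+\s+`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
			return nil // already present, nothing to do
		}
		line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if line127.Match(data) {
			out = line127.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/tmp/hosts-copy", "addons-583828"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hosts entry ensured")
	}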
	I1205 20:25:05.001336  831680 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20053-823623/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-823623/.minikube}
	I1205 20:25:05.001367  831680 ubuntu.go:177] setting up certificates
	I1205 20:25:05.001381  831680 provision.go:84] configureAuth start
	I1205 20:25:05.001440  831680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583828
	I1205 20:25:05.019055  831680 provision.go:143] copyHostCerts
	I1205 20:25:05.019139  831680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-823623/.minikube/ca.pem (1078 bytes)
	I1205 20:25:05.019282  831680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-823623/.minikube/cert.pem (1123 bytes)
	I1205 20:25:05.019349  831680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-823623/.minikube/key.pem (1679 bytes)
	I1205 20:25:05.019399  831680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-823623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca-key.pem org=jenkins.addons-583828 san=[127.0.0.1 192.168.49.2 addons-583828 localhost minikube]
	I1205 20:25:05.117161  831680 provision.go:177] copyRemoteCerts
	I1205 20:25:05.117249  831680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:25:05.117301  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.135132  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.230225  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:25:05.253790  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:25:05.277353  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:25:05.300966  831680 provision.go:87] duration metric: took 299.563049ms to configureAuth
	I1205 20:25:05.301009  831680 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:25:05.301200  831680 config.go:182] Loaded profile config "addons-583828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:05.301314  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.319855  831680 main.go:141] libmachine: Using SSH client type: native
	I1205 20:25:05.320072  831680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1205 20:25:05.320102  831680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:25:05.544675  831680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:25:05.544704  831680 machine.go:96] duration metric: took 1.099635008s to provisionDockerMachine
	I1205 20:25:05.544716  831680 client.go:171] duration metric: took 13.850623198s to LocalClient.Create
	I1205 20:25:05.544734  831680 start.go:167] duration metric: took 13.850702137s to libmachine.API.Create "addons-583828"
	I1205 20:25:05.544744  831680 start.go:293] postStartSetup for "addons-583828" (driver="docker")
	I1205 20:25:05.544761  831680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:25:05.544838  831680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:25:05.544881  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.562988  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.658728  831680 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:25:05.662233  831680 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:25:05.662282  831680 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:25:05.662290  831680 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:25:05.662298  831680 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 20:25:05.662313  831680 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-823623/.minikube/addons for local assets ...
	I1205 20:25:05.662379  831680 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-823623/.minikube/files for local assets ...
	I1205 20:25:05.662403  831680 start.go:296] duration metric: took 117.647983ms for postStartSetup
	I1205 20:25:05.662708  831680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583828
	I1205 20:25:05.681670  831680 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/config.json ...
	I1205 20:25:05.681981  831680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:25:05.682063  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.700607  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.790218  831680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:25:05.794810  831680 start.go:128] duration metric: took 14.102914635s to createHost
	I1205 20:25:05.794840  831680 start.go:83] releasing machines lock for "addons-583828", held for 14.103043196s
	I1205 20:25:05.794925  831680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583828
	I1205 20:25:05.812280  831680 ssh_runner.go:195] Run: cat /version.json
	I1205 20:25:05.812352  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.812356  831680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:25:05.812411  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:05.832282  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.832657  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:05.998926  831680 ssh_runner.go:195] Run: systemctl --version
	I1205 20:25:06.003471  831680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:25:06.145021  831680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:25:06.149896  831680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:25:06.169829  831680 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:25:06.169931  831680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:25:06.199311  831680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 20:25:06.199341  831680 start.go:495] detecting cgroup driver to use...
	I1205 20:25:06.199384  831680 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 20:25:06.199457  831680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:25:06.215640  831680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:25:06.226828  831680 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:25:06.226899  831680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:25:06.239972  831680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:25:06.254908  831680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:25:06.331418  831680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:25:06.416499  831680 docker.go:233] disabling docker service ...
	I1205 20:25:06.416577  831680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:25:06.436381  831680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:25:06.448155  831680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:25:06.531116  831680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:25:06.612073  831680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:25:06.623275  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:25:06.639478  831680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:25:06.639550  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.650274  831680 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:25:06.650360  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.660543  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.670851  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.681702  831680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:25:06.692044  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.702284  831680 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.718572  831680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:25:06.728485  831680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:25:06.736834  831680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:25:06.745242  831680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:25:06.822121  831680 ssh_runner.go:195] Run: sudo systemctl restart crio
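Editor's note: the run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O: pin the pause image, force the cgroupfs cgroup manager, and put conmon into the pod cgroup. A rough single-pass equivalent is sketched below; the paths and values come from the log, but the regular expressions are simplified approximations, not minikube's actual sed scripts.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"regexp"
	)
	
	// patchCrioConf applies the same three edits the log performs with sed:
	// pause_image, cgroup_manager, and a conmon_cgroup = "pod" line after it.
	func patchCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		s := string(data)
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
		s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return os.WriteFile(path, []byte(s), 0644)
	}
	
	func main() {
		if err := patchCrioConf("/tmp/02-crio.conf"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("crio config patched; restart crio to apply")
	}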
	I1205 20:25:06.931562  831680 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:25:06.931665  831680 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:25:06.935408  831680 start.go:563] Will wait 60s for crictl version
	I1205 20:25:06.935472  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:25:06.938794  831680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:25:06.973589  831680 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 20:25:06.973672  831680 ssh_runner.go:195] Run: crio --version
	I1205 20:25:07.010037  831680 ssh_runner.go:195] Run: crio --version
	I1205 20:25:07.047228  831680 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1205 20:25:07.048570  831680 cli_runner.go:164] Run: docker network inspect addons-583828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:25:07.066157  831680 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 20:25:07.070191  831680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:25:07.082490  831680 kubeadm.go:883] updating cluster {Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:25:07.082616  831680 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:25:07.082667  831680 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:25:07.151041  831680 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:25:07.151071  831680 crio.go:433] Images already preloaded, skipping extraction
	I1205 20:25:07.151130  831680 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:25:07.184077  831680 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:25:07.184107  831680 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:25:07.184119  831680 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1205 20:25:07.184245  831680 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-583828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:25:07.184329  831680 ssh_runner.go:195] Run: crio config
	I1205 20:25:07.228423  831680 cni.go:84] Creating CNI manager for ""
	I1205 20:25:07.228448  831680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:25:07.228461  831680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:25:07.228484  831680 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-583828 NodeName:addons-583828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:25:07.228634  831680 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-583828"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:25:07.228702  831680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:25:07.237934  831680 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:25:07.238017  831680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:25:07.246661  831680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1205 20:25:07.264254  831680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:25:07.281467  831680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
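Editor's note: the kubeadm.yaml.new copied to the node here is the multi-document config printed above, rendered from the cluster values (node IP, node name, Kubernetes version, pod/service CIDRs). A deliberately stripped-down sketch of that rendering step using text/template is shown below; it covers only a small subset of the fields in the log and is not minikube's actual template.
	package main
	
	import (
		"log"
		"os"
		"text/template"
	)
	
	// A minimal slice of the kubeadm config the log prints; the real template
	// is much larger.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	type clusterParams struct {
		NodeIP            string
		NodeName          string
		APIServerPort     int
		KubernetesVersion string
		PodSubnet         string
		ServiceSubnet     string
	}
	
	func main() {
		p := clusterParams{
			NodeIP:            "192.168.49.2",
			NodeName:          "addons-583828",
			APIServerPort:     8443,
			KubernetesVersion: "v1.31.2",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			log.Fatal(err)
		}
	}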
	I1205 20:25:07.298986  831680 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 20:25:07.302842  831680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:25:07.313729  831680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:25:07.395631  831680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:25:07.408919  831680 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828 for IP: 192.168.49.2
	I1205 20:25:07.408951  831680 certs.go:194] generating shared ca certs ...
	I1205 20:25:07.408976  831680 certs.go:226] acquiring lock for ca certs: {Name:mke4ccebecd1ee68171cc800d6bc3abd7616bf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.409166  831680 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key
	I1205 20:25:07.666515  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt ...
	I1205 20:25:07.666557  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt: {Name:mk4ca2ecc886e49fb3989918896448d71f14a1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.666785  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key ...
	I1205 20:25:07.666804  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key: {Name:mka1a40173cbae49266cc33991920a68d9bf7a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.666921  831680 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key
	I1205 20:25:07.814752  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.crt ...
	I1205 20:25:07.814788  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.crt: {Name:mk32392bb439f48ba844502d0094f45eb93fca5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.814971  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key ...
	I1205 20:25:07.814989  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key: {Name:mk64c94fe082a3c8b3a5df5322d4c77c5d5d4b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.815096  831680 certs.go:256] generating profile certs ...
	I1205 20:25:07.815177  831680 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.key
	I1205 20:25:07.815199  831680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt with IP's: []
	I1205 20:25:07.887110  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt ...
	I1205 20:25:07.887153  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: {Name:mkecdb5815ddd7a55b990e08588fa22218865530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.887362  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.key ...
	I1205 20:25:07.887378  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.key: {Name:mk2cad970dc24ba84a4a459836b2a00bc1082777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:07.887479  831680 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799
	I1205 20:25:07.887505  831680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1205 20:25:08.357138  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799 ...
	I1205 20:25:08.357183  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799: {Name:mk5fc20678d44d41d46ac5c2e916ba4d3d960aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.357402  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799 ...
	I1205 20:25:08.357423  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799: {Name:mkd526552707e9e1af645510765abe85e1843157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.357531  831680 certs.go:381] copying /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt.0713d799 -> /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt
	I1205 20:25:08.357637  831680 certs.go:385] copying /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key.0713d799 -> /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key
	I1205 20:25:08.357718  831680 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key
	I1205 20:25:08.357749  831680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt with IP's: []
	I1205 20:25:08.491735  831680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt ...
	I1205 20:25:08.491777  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt: {Name:mk867dc812f11a9b557ceea6008c3c6754041c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.491988  831680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key ...
	I1205 20:25:08.492008  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key: {Name:mkbfb5c2f2f80b7f1a012de232e9db115e5277b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:08.492227  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:25:08.492286  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:25:08.492327  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:25:08.492375  831680 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-823623/.minikube/certs/key.pem (1679 bytes)
	I1205 20:25:08.493102  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:25:08.517914  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:25:08.542354  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:25:08.566777  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:25:08.590879  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 20:25:08.614464  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:25:08.637824  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:25:08.661533  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:25:08.685220  831680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:25:08.709327  831680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:25:08.726772  831680 ssh_runner.go:195] Run: openssl version
	I1205 20:25:08.732371  831680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:25:08.741798  831680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:25:08.745305  831680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:25:08.745365  831680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:25:08.752220  831680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
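Editor's note: the openssl/ln pair above installs minikubeCA.pem into the system trust directory under its OpenSSL subject-hash name (b5213941.0). To show where that name comes from, here is a small wrapper that reproduces both steps; the paths match the log, but the wrapper itself is an illustration, not minikube code.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)
	
	// installCACert links a PEM certificate into certsDir under the
	// "<subject-hash>.0" name that OpenSSL-based tools look up.
	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := certsDir + "/" + hash + ".0"
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("CA certificate linked")
	}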
	I1205 20:25:08.761740  831680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:25:08.765244  831680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:25:08.765293  831680 kubeadm.go:392] StartCluster: {Name:addons-583828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-583828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:25:08.765382  831680 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:25:08.765432  831680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:25:08.800348  831680 cri.go:89] found id: ""
	I1205 20:25:08.800421  831680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:25:08.809313  831680 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:25:08.817974  831680 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1205 20:25:08.818040  831680 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:25:08.826466  831680 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:25:08.826491  831680 kubeadm.go:157] found existing configuration files:
	
	I1205 20:25:08.826549  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:25:08.835135  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:25:08.835190  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:25:08.843630  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:25:08.852236  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:25:08.852317  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:25:08.861946  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:25:08.871304  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:25:08.871377  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:25:08.880583  831680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:25:08.889443  831680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:25:08.889510  831680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:25:08.897486  831680 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
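Editor's note: the kubeadm init invocation above is the versioned binary plus a config file and a long --ignore-preflight-errors list. A sketch of assembling and running such a command is shown below; the error names are a shortened subset of the list in the log, and everything else (paths shown, structure) follows the log rather than minikube's source.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		binDir := "/var/lib/minikube/binaries/v1.31.2"
		// A shortened subset of the --ignore-preflight-errors list in the log.
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"DirAvailable--var-lib-minikube",
			"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
		}
		// The log additionally wraps this in `sudo env PATH=<binDir>:$PATH ...`;
		// here the versioned kubeadm binary is simply addressed by full path.
		cmd := exec.Command(binDir+"/kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors="+strings.Join(ignored, ","),
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("kubeadm init finished")
	}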
	I1205 20:25:08.935422  831680 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:25:08.935533  831680 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:25:08.952942  831680 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1205 20:25:08.953064  831680 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1205 20:25:08.953117  831680 kubeadm.go:310] OS: Linux
	I1205 20:25:08.953192  831680 kubeadm.go:310] CGROUPS_CPU: enabled
	I1205 20:25:08.953247  831680 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1205 20:25:08.953289  831680 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1205 20:25:08.953335  831680 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1205 20:25:08.953378  831680 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1205 20:25:08.953458  831680 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1205 20:25:08.953528  831680 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1205 20:25:08.953607  831680 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1205 20:25:08.953661  831680 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1205 20:25:09.006919  831680 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:25:09.007055  831680 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:25:09.007189  831680 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:25:09.014151  831680 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:25:09.017218  831680 out.go:235]   - Generating certificates and keys ...
	I1205 20:25:09.017329  831680 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:25:09.017392  831680 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:25:09.223237  831680 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:25:09.361853  831680 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:25:09.465596  831680 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:25:09.540297  831680 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:25:09.675864  831680 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:25:09.676035  831680 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-583828 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 20:25:09.823949  831680 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:25:09.824094  831680 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-583828 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 20:25:10.017244  831680 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:25:10.150094  831680 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:25:10.265760  831680 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:25:10.265881  831680 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:25:10.506959  831680 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:25:10.613042  831680 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:25:10.773557  831680 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:25:10.858551  831680 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:25:10.944620  831680 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:25:10.945094  831680 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:25:10.947667  831680 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:25:10.949780  831680 out.go:235]   - Booting up control plane ...
	I1205 20:25:10.949918  831680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:25:10.949992  831680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:25:10.950556  831680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:25:10.960097  831680 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:25:10.965544  831680 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:25:10.965610  831680 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:25:11.044081  831680 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:25:11.044265  831680 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:25:11.545686  831680 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.746842ms
	I1205 20:25:11.545775  831680 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:25:16.047211  831680 kubeadm.go:310] [api-check] The API server is healthy after 4.501501774s
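Editor's note: the [kubelet-check] and [api-check] phases above are plain HTTP polls against healthz endpoints with a deadline. A minimal version of that wait loop is sketched below; the kubelet URL and 4-minute cap come from the log, while the poll interval and client timeout are arbitrary choices, not kubeadm's.
	package main
	
	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)
	
	// waitHealthz polls url until it answers 200 OK or the timeout expires,
	// roughly what the kubelet-check and api-check phases do.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("kubelet is healthy")
	}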
	I1205 20:25:16.059306  831680 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:25:16.071792  831680 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:25:16.090776  831680 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:25:16.091066  831680 kubeadm.go:310] [mark-control-plane] Marking the node addons-583828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:25:16.098837  831680 kubeadm.go:310] [bootstrap-token] Using token: evkn3l.jc6r2670y9dag6rg
	I1205 20:25:16.100508  831680 out.go:235]   - Configuring RBAC rules ...
	I1205 20:25:16.100623  831680 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:25:16.106685  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:25:16.113191  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:25:16.115934  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:25:16.118788  831680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:25:16.122418  831680 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:25:16.453935  831680 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:25:16.875309  831680 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:25:17.457277  831680 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:25:17.458489  831680 kubeadm.go:310] 
	I1205 20:25:17.458585  831680 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:25:17.458600  831680 kubeadm.go:310] 
	I1205 20:25:17.458708  831680 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:25:17.458746  831680 kubeadm.go:310] 
	I1205 20:25:17.458796  831680 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:25:17.458886  831680 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:25:17.458963  831680 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:25:17.458977  831680 kubeadm.go:310] 
	I1205 20:25:17.459073  831680 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:25:17.459110  831680 kubeadm.go:310] 
	I1205 20:25:17.459190  831680 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:25:17.459206  831680 kubeadm.go:310] 
	I1205 20:25:17.459295  831680 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:25:17.459426  831680 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:25:17.459485  831680 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:25:17.459496  831680 kubeadm.go:310] 
	I1205 20:25:17.459611  831680 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:25:17.459697  831680 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:25:17.459704  831680 kubeadm.go:310] 
	I1205 20:25:17.459793  831680 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token evkn3l.jc6r2670y9dag6rg \
	I1205 20:25:17.459915  831680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a89d03b6be8118d89fe05341663c46b6deed4b956c25004c98e677338dc832f2 \
	I1205 20:25:17.459949  831680 kubeadm.go:310] 	--control-plane 
	I1205 20:25:17.459959  831680 kubeadm.go:310] 
	I1205 20:25:17.460073  831680 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:25:17.460082  831680 kubeadm.go:310] 
	I1205 20:25:17.460194  831680 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token evkn3l.jc6r2670y9dag6rg \
	I1205 20:25:17.460332  831680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a89d03b6be8118d89fe05341663c46b6deed4b956c25004c98e677338dc832f2 
	I1205 20:25:17.462677  831680 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1205 20:25:17.462828  831680 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:25:17.462847  831680 cni.go:84] Creating CNI manager for ""
	I1205 20:25:17.462856  831680 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:25:17.464668  831680 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:25:17.466256  831680 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:25:17.470569  831680 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 20:25:17.470591  831680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 20:25:17.488720  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:25:17.690368  831680 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:25:17.690471  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:17.690509  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-583828 minikube.k8s.io/updated_at=2024_12_05T20_25_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=addons-583828 minikube.k8s.io/primary=true
	I1205 20:25:17.698076  831680 ops.go:34] apiserver oom_adj: -16
	I1205 20:25:17.759309  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:18.260223  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:18.759526  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:19.260215  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:19.760388  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:20.260175  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:20.760130  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:21.259724  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:21.759755  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:22.260206  831680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:25:22.329475  831680 kubeadm.go:1113] duration metric: took 4.639070164s to wait for elevateKubeSystemPrivileges
	I1205 20:25:22.329566  831680 kubeadm.go:394] duration metric: took 13.564276843s to StartCluster
	I1205 20:25:22.329599  831680 settings.go:142] acquiring lock: {Name:mk7ebf380bcfa7aba647ea9c26917767ebbabc59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:22.329747  831680 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:25:22.330352  831680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-823623/kubeconfig: {Name:mked749022ef3c102f724c73a9801abef71a2d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:25:22.330608  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:25:22.330624  831680 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:25:22.330701  831680 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 20:25:22.330875  831680 addons.go:69] Setting yakd=true in profile "addons-583828"
	I1205 20:25:22.330885  831680 config.go:182] Loaded profile config "addons-583828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:22.330900  831680 addons.go:234] Setting addon yakd=true in "addons-583828"
	I1205 20:25:22.330900  831680 addons.go:69] Setting ingress=true in profile "addons-583828"
	I1205 20:25:22.330916  831680 addons.go:69] Setting default-storageclass=true in profile "addons-583828"
	I1205 20:25:22.330925  831680 addons.go:234] Setting addon ingress=true in "addons-583828"
	I1205 20:25:22.330921  831680 addons.go:69] Setting gcp-auth=true in profile "addons-583828"
	I1205 20:25:22.330939  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.330941  831680 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-583828"
	I1205 20:25:22.330965  831680 mustload.go:65] Loading cluster: addons-583828
	I1205 20:25:22.330934  831680 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-583828"
	I1205 20:25:22.331287  831680 addons.go:69] Setting ingress-dns=true in profile "addons-583828"
	I1205 20:25:22.331314  831680 addons.go:234] Setting addon ingress-dns=true in "addons-583828"
	I1205 20:25:22.331343  831680 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-583828"
	I1205 20:25:22.331357  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.331386  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.330901  831680 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-583828"
	I1205 20:25:22.331583  831680 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-583828"
	I1205 20:25:22.331610  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.331620  831680 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-583828"
	I1205 20:25:22.331633  831680 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-583828"
	I1205 20:25:22.331653  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.331938  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332193  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332287  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332456  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.332863  831680 addons.go:69] Setting registry=true in profile "addons-583828"
	I1205 20:25:22.332884  831680 addons.go:234] Setting addon registry=true in "addons-583828"
	I1205 20:25:22.332938  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.333496  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.333838  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.333964  831680 addons.go:69] Setting storage-provisioner=true in profile "addons-583828"
	I1205 20:25:22.333988  831680 addons.go:234] Setting addon storage-provisioner=true in "addons-583828"
	I1205 20:25:22.334014  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.334275  831680 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-583828"
	I1205 20:25:22.334343  831680 out.go:177] * Verifying Kubernetes components...
	I1205 20:25:22.334506  831680 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-583828"
	I1205 20:25:22.334654  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.334724  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.334947  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.330887  831680 addons.go:69] Setting metrics-server=true in profile "addons-583828"
	I1205 20:25:22.336071  831680 addons.go:234] Setting addon metrics-server=true in "addons-583828"
	I1205 20:25:22.336116  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.336521  831680 addons.go:69] Setting volcano=true in profile "addons-583828"
	I1205 20:25:22.336591  831680 addons.go:234] Setting addon volcano=true in "addons-583828"
	I1205 20:25:22.336686  831680 addons.go:69] Setting volumesnapshots=true in profile "addons-583828"
	I1205 20:25:22.336722  831680 addons.go:234] Setting addon volumesnapshots=true in "addons-583828"
	I1205 20:25:22.336758  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.336819  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.337038  831680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:25:22.337964  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.334372  831680 addons.go:69] Setting inspektor-gadget=true in profile "addons-583828"
	I1205 20:25:22.340119  831680 addons.go:234] Setting addon inspektor-gadget=true in "addons-583828"
	I1205 20:25:22.340211  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.338915  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.330886  831680 addons.go:69] Setting cloud-spanner=true in profile "addons-583828"
	I1205 20:25:22.331550  831680 config.go:182] Loaded profile config "addons-583828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:22.338680  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.342609  831680 addons.go:234] Setting addon cloud-spanner=true in "addons-583828"
	I1205 20:25:22.343355  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.368005  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.372557  831680 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 20:25:22.373307  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.373887  831680 addons.go:234] Setting addon default-storageclass=true in "addons-583828"
	I1205 20:25:22.373937  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.373938  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.374409  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.374615  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.376559  831680 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 20:25:22.376679  831680 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 20:25:22.377599  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.377905  831680 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 20:25:22.377928  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 20:25:22.377986  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.378256  831680 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:25:22.378278  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 20:25:22.378332  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.392260  831680 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 20:25:22.393699  831680 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:25:22.393725  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 20:25:22.393799  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.417653  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.436725  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.440268  831680 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:25:22.440293  831680 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:25:22.440354  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.442811  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 20:25:22.445394  831680 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:25:22.447327  831680 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:25:22.447351  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:25:22.447412  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.447606  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 20:25:22.447715  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 20:25:22.449700  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 20:25:22.449723  831680 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 20:25:22.449787  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.449956  831680 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 20:25:22.451553  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 20:25:22.451619  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:25:22.451629  831680 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:25:22.451677  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.451885  831680 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 20:25:22.453094  831680 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 20:25:22.453112  831680 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 20:25:22.453165  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.454720  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.454971  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 20:25:22.457355  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W1205 20:25:22.458718  831680 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 20:25:22.461080  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 20:25:22.462312  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 20:25:22.463565  831680 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 20:25:22.464658  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 20:25:22.464681  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 20:25:22.464744  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.464791  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.467002  831680 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-583828"
	I1205 20:25:22.467090  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:22.467092  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 20:25:22.467593  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:22.469414  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:25:22.470513  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:25:22.471659  831680 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 20:25:22.471923  831680 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:25:22.471951  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 20:25:22.472012  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.473503  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 20:25:22.473523  831680 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 20:25:22.473580  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.474734  831680 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 20:25:22.475986  831680 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 20:25:22.476006  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 20:25:22.476062  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.481755  831680 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 20:25:22.483577  831680 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:25:22.483598  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 20:25:22.483653  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.485260  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.486800  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.492190  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.492402  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.494743  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.501962  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.525126  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.525533  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.527448  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.532031  831680 out.go:177]   - Using image docker.io/busybox:stable
	I1205 20:25:22.533051  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	W1205 20:25:22.533414  831680 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 20:25:22.533462  831680 retry.go:31] will retry after 340.327697ms: ssh: handshake failed: EOF
	W1205 20:25:22.534116  831680 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1205 20:25:22.534135  831680 retry.go:31] will retry after 151.300109ms: ssh: handshake failed: EOF
	I1205 20:25:22.540202  831680 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 20:25:22.544977  831680 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:25:22.545002  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 20:25:22.545076  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:22.562214  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:22.724623  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:25:22.739461  831680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:25:22.814190  831680 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 20:25:22.814221  831680 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 20:25:22.928623  831680 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 20:25:22.928711  831680 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 20:25:22.930051  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:25:22.933216  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:25:23.010835  831680 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:25:23.010882  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 20:25:23.013096  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:25:23.018375  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:25:23.024774  831680 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:25:23.024866  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 20:25:23.025931  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:25:23.032078  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:25:23.110859  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:25:23.128522  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:25:23.128617  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 20:25:23.130606  831680 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 20:25:23.130693  831680 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 20:25:23.211800  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 20:25:23.211912  831680 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 20:25:23.220578  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:25:23.225024  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 20:25:23.225053  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 20:25:23.310870  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:25:23.411478  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:25:23.411574  831680 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:25:23.511552  831680 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 20:25:23.511657  831680 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 20:25:23.525408  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 20:25:23.525507  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 20:25:23.623090  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 20:25:23.623194  831680 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 20:25:23.629856  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 20:25:23.822790  831680 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:25:23.822890  831680 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:25:23.918481  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 20:25:23.918574  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 20:25:24.010673  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 20:25:24.010776  831680 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 20:25:24.016755  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 20:25:24.016845  831680 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 20:25:24.213089  831680 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:25:24.213116  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 20:25:24.309612  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 20:25:24.309724  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 20:25:24.312784  831680 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:25:24.312865  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 20:25:24.411598  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:25:24.428582  831680 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.703907598s)
	I1205 20:25:24.428805  831680 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 20:25:24.428729  831680 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.6892292s)
	I1205 20:25:24.430063  831680 node_ready.go:35] waiting up to 6m0s for node "addons-583828" to be "Ready" ...
	I1205 20:25:24.523373  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:25:24.613162  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:25:24.722691  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.792481903s)
	I1205 20:25:25.111937  831680 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 20:25:25.112035  831680 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 20:25:25.427796  831680 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-583828" context rescaled to 1 replicas
	I1205 20:25:25.622175  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 20:25:25.622209  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 20:25:25.911032  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 20:25:25.911067  831680 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 20:25:26.031130  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 20:25:26.031171  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 20:25:26.219631  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 20:25:26.219659  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 20:25:26.410549  831680 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:25:26.410585  831680 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 20:25:26.518211  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:25:26.829982  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:27.331347  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.398084856s)
	I1205 20:25:27.331427  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.318304401s)
	I1205 20:25:27.331474  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.313012721s)
	I1205 20:25:27.622940  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.596922141s)
	I1205 20:25:27.623228  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.59111365s)
	W1205 20:25:27.929470  831680 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1205 20:25:29.019506  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:29.135391  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.024401389s)
	I1205 20:25:29.135459  831680 addons.go:475] Verifying addon ingress=true in "addons-583828"
	I1205 20:25:29.135474  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.914860228s)
	I1205 20:25:29.135561  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.824597205s)
	I1205 20:25:29.135581  831680 addons.go:475] Verifying addon registry=true in "addons-583828"
	I1205 20:25:29.135784  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.505880051s)
	I1205 20:25:29.135897  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.724207446s)
	I1205 20:25:29.135924  831680 addons.go:475] Verifying addon metrics-server=true in "addons-583828"
	I1205 20:25:29.137893  831680 out.go:177] * Verifying ingress addon...
	I1205 20:25:29.137908  831680 out.go:177] * Verifying registry addon...
	I1205 20:25:29.139928  831680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 20:25:29.139991  831680 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 20:25:29.216130  831680 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 20:25:29.216165  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:29.216389  831680 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 20:25:29.216404  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:29.644151  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:29.644750  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:29.710911  831680 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 20:25:29.710997  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:29.739321  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:30.030231  831680 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 20:25:30.037992  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.424734986s)
	I1205 20:25:30.037911  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.514424126s)
	W1205 20:25:30.038366  831680 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:25:30.038428  831680 retry.go:31] will retry after 174.297587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:25:30.039846  831680 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-583828 service yakd-dashboard -n yakd-dashboard
	
	I1205 20:25:30.114006  831680 addons.go:234] Setting addon gcp-auth=true in "addons-583828"
	I1205 20:25:30.114141  831680 host.go:66] Checking if "addons-583828" exists ...
	I1205 20:25:30.114735  831680 cli_runner.go:164] Run: docker container inspect addons-583828 --format={{.State.Status}}
	I1205 20:25:30.143059  831680 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 20:25:30.143126  831680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583828
	I1205 20:25:30.147500  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:30.162360  831680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/addons-583828/id_rsa Username:docker}
	I1205 20:25:30.212957  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:25:30.248317  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:30.643673  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:30.644338  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:30.832541  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.31420902s)
	I1205 20:25:30.832599  831680 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-583828"
	I1205 20:25:30.834999  831680 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 20:25:30.837265  831680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 20:25:30.840556  831680 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 20:25:30.840577  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:31.144051  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:31.144561  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:31.341266  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:31.433896  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:31.643983  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:31.644409  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:31.841441  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:32.144316  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:32.144738  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:32.341988  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:32.644171  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:32.644627  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:32.841595  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:33.145141  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:33.145738  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:33.157759  831680 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.944744053s)
	I1205 20:25:33.157852  831680 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.014753329s)
	I1205 20:25:33.159525  831680 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:25:33.160909  831680 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 20:25:33.162076  831680 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 20:25:33.162094  831680 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 20:25:33.179534  831680 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 20:25:33.179563  831680 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 20:25:33.196523  831680 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:25:33.196552  831680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 20:25:33.214223  831680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:25:33.341016  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:33.434009  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:33.559470  831680 addons.go:475] Verifying addon gcp-auth=true in "addons-583828"
	I1205 20:25:33.562054  831680 out.go:177] * Verifying gcp-auth addon...
	I1205 20:25:33.564163  831680 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 20:25:33.566974  831680 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 20:25:33.566995  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:33.643912  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:33.644292  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:33.841765  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:34.067430  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:34.143565  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:34.144149  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:34.340944  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:34.567805  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:34.643865  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:34.644462  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:34.841350  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:35.068239  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:35.143261  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:35.143983  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:35.341731  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:35.567410  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:35.643835  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:35.644285  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:35.841448  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:35.934307  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:36.068196  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:36.143504  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:36.143861  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:36.341163  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:36.567984  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:36.644358  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:36.644679  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:36.841432  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:37.067743  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:37.143800  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:37.144281  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:37.340979  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:37.568316  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:37.643165  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:37.643811  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:37.841792  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:38.067399  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:38.143309  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:38.143838  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:38.341368  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:38.434424  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:38.567404  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:38.643153  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:38.643631  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:38.841510  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:39.067651  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:39.143773  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:39.144045  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:39.340916  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:39.567486  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:39.643470  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:39.643890  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:39.841554  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:40.068083  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:40.143952  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:40.144365  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:40.341324  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:40.567939  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:40.644102  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:40.644540  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:40.841696  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:40.933420  831680 node_ready.go:53] node "addons-583828" has status "Ready":"False"
	I1205 20:25:41.067279  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:41.143350  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:41.143742  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:41.340599  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:41.625269  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:41.719806  831680 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 20:25:41.719901  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:41.719929  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:41.841368  831680 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 20:25:41.841402  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:41.933359  831680 node_ready.go:49] node "addons-583828" has status "Ready":"True"
	I1205 20:25:41.933390  831680 node_ready.go:38] duration metric: took 17.503244864s for node "addons-583828" to be "Ready" ...
	I1205 20:25:41.933403  831680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:25:41.946120  831680 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace to be "Ready" ...
	I1205 20:25:42.113859  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:42.214292  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:42.214957  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:42.343363  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:42.568274  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:42.669229  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:42.669803  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:42.842270  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:43.068666  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:43.144337  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:43.144394  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:43.342422  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:43.567639  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:43.668747  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:43.668889  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:43.842238  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:43.952481  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:44.112559  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:44.145621  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:44.145881  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:44.342782  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:44.568502  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:44.669525  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:44.669771  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:44.842200  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:45.068191  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:45.169353  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:45.169636  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:45.342419  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:45.568547  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:45.643973  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:45.644207  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:45.842113  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:46.068530  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:46.144106  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:46.144306  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:46.342024  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:46.451530  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:46.567801  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:46.644236  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:46.644753  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:46.842867  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:47.067982  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:47.145513  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:47.147113  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:47.341945  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:47.568131  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:47.644336  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:47.644551  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:47.842439  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:48.068933  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:48.144128  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:48.144300  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:48.341751  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:48.452409  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:48.567712  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:48.643942  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:48.644423  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:48.842641  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:49.068328  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:49.144708  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:49.145082  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:49.342497  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:49.568332  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:49.644557  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:49.645178  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:49.842003  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:50.068346  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:50.143835  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:50.143936  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:50.342803  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:50.452530  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:50.567920  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:50.644370  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:50.644767  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:50.844853  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:51.068594  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:51.144223  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:51.144490  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:51.342198  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:51.568389  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:51.645164  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:51.645264  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:51.842147  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:52.068323  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:52.143360  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:52.143536  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:52.342145  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:52.567677  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:52.643925  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:52.644305  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:52.842466  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:52.953182  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:53.068922  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:53.144459  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:53.144841  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:53.342023  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:53.612428  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:53.645320  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:53.645589  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:53.843419  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:54.068851  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:54.145383  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:54.145713  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:54.342403  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:54.567896  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:54.644225  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:54.644428  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:54.842469  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:55.067854  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:55.143979  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:55.144603  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:55.342346  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:55.452996  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:55.568521  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:55.644121  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:55.644304  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:55.842431  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:56.068595  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:56.144838  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:56.145049  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:56.341725  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:56.612171  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:56.644390  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:56.644779  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:56.842880  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:57.112331  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:57.144916  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:57.144948  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:57.342823  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:57.568554  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:57.647051  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:57.647057  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:57.842689  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:57.952122  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:25:58.068068  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:58.144638  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:58.144954  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:58.344779  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:58.568740  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:58.644474  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:58.644878  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:58.842107  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:59.068382  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:59.143684  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:59.143904  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:59.342517  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:25:59.627532  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:25:59.714723  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:25:59.715964  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:25:59.915191  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:00.016328  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:00.113137  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:00.144522  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:00.145566  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:00.341816  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:00.612428  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:00.712712  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:00.713134  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:00.842415  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:01.068099  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:01.144475  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:01.145381  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:01.341484  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:01.568110  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:01.644751  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:01.645233  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:01.850808  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:02.067732  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:02.144459  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:02.144756  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:02.342467  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:02.452353  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:02.568746  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:02.643912  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:02.644253  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:02.843614  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:03.068637  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:03.144146  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:03.144327  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:03.341648  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:03.568126  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:03.644506  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:03.645092  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:03.842469  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:04.067663  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:04.143707  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:04.143983  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:04.342127  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:04.453269  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:04.568502  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:04.644035  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:04.644827  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:04.842905  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:05.067680  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:05.143826  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:05.144538  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:05.342476  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:05.568312  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:05.643562  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:05.643908  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:05.842766  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:06.068822  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:06.144139  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:06.144406  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:06.342167  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:06.568985  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:06.669581  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:06.669704  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:06.842543  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:06.952319  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:07.068613  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:07.143753  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:07.143954  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:07.341547  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:07.567785  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:07.644108  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:07.644396  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:07.842212  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:08.068918  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:08.169777  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:08.169950  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:08.342807  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:08.567588  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:08.644257  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:08.644390  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:08.842435  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:08.952872  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:09.113717  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:09.145447  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:09.145705  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:09.414310  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:09.618528  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:09.714191  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:09.717180  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:09.915006  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:10.113296  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:10.215179  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:10.217891  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:10.415255  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:10.612209  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:10.714127  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:10.714410  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:10.912738  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:11.012444  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:11.112262  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:11.144981  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:11.145514  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:11.342806  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:11.568185  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:11.644795  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:11.645323  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:11.843612  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:12.068257  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:12.148681  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:12.248447  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:12.342943  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:12.567983  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:12.644320  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:12.644462  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:12.842836  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:13.068417  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:13.143631  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:13.144862  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:13.342288  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:13.452231  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:13.568370  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:13.643837  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:13.644005  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:13.843178  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:14.068400  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:14.145141  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:14.146419  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:14.342156  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:14.568443  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:14.644069  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:14.644459  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:14.841659  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:15.068746  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:15.144602  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:15.145204  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:15.342281  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:15.567793  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:15.644371  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:15.644810  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:15.841868  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:15.953053  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:16.068332  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:16.144694  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:16.144929  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:16.343194  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:16.568614  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:16.643890  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:16.644292  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:16.842523  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:17.068778  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:17.144511  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:17.145098  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:17.342373  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:17.568701  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:17.646499  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:17.646753  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:17.842318  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:18.068288  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:18.144844  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:18.145410  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:18.342208  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:18.452073  831680 pod_ready.go:103] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:18.568173  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:18.644623  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:18.644774  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:18.842673  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:19.068838  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:19.169834  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:19.169992  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:19.341704  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:19.568409  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:19.644133  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:19.644451  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:19.842842  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:20.111630  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:20.144418  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:20.144794  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:20.342324  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:20.455555  831680 pod_ready.go:93] pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.455586  831680 pod_ready.go:82] duration metric: took 38.509422639s for pod "amd-gpu-device-plugin-rc729" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.455610  831680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dkkxw" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.463291  831680 pod_ready.go:93] pod "coredns-7c65d6cfc9-dkkxw" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.463321  831680 pod_ready.go:82] duration metric: took 7.702515ms for pod "coredns-7c65d6cfc9-dkkxw" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.463356  831680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.468461  831680 pod_ready.go:93] pod "etcd-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.468483  831680 pod_ready.go:82] duration metric: took 5.119928ms for pod "etcd-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.468494  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.512489  831680 pod_ready.go:93] pod "kube-apiserver-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.512517  831680 pod_ready.go:82] duration metric: took 44.016979ms for pod "kube-apiserver-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.512528  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.518277  831680 pod_ready.go:93] pod "kube-controller-manager-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.518299  831680 pod_ready.go:82] duration metric: took 5.764644ms for pod "kube-controller-manager-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.518311  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7b2sh" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.568020  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:20.644874  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:20.645213  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:20.842543  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:20.850610  831680 pod_ready.go:93] pod "kube-proxy-7b2sh" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:20.850639  831680 pod_ready.go:82] duration metric: took 332.319507ms for pod "kube-proxy-7b2sh" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:20.850652  831680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:21.067950  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:21.168863  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:21.211766  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:21.249983  831680 pod_ready.go:93] pod "kube-scheduler-addons-583828" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:21.250009  831680 pod_ready.go:82] duration metric: took 399.349463ms for pod "kube-scheduler-addons-583828" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:21.250020  831680 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:21.342354  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:21.567655  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:21.643955  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:21.644547  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:21.842853  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:22.067419  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:22.146612  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:26:22.146898  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:22.342865  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:22.612465  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:22.711823  831680 kapi.go:107] duration metric: took 53.571886889s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 20:26:22.712583  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:22.913143  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:23.111690  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:23.144284  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:23.256276  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:23.342607  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:23.567674  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:23.644752  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:23.842333  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:24.068276  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:24.144633  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:24.343406  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:24.567636  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:24.644225  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:24.842634  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:25.068510  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:25.144079  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:25.260254  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:25.341797  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:25.612530  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:25.645554  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:25.842776  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:26.067849  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:26.168938  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:26.342582  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:26.567368  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:26.645036  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:26.843227  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:27.068558  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:27.144790  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:27.343212  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:27.568978  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:27.670312  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:27.755911  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:27.841365  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:28.114464  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:26:28.213110  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:28.415357  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:28.631516  831680 kapi.go:107] duration metric: took 55.067345784s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 20:26:28.633464  831680 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-583828 cluster.
	I1205 20:26:28.710886  831680 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 20:26:28.712350  831680 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 20:26:28.721500  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:28.843087  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:29.213474  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:29.412778  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:29.713886  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:29.816626  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:29.915291  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:30.144565  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:30.342700  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:30.645065  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:30.841860  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:31.144835  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:31.342617  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:31.644149  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:31.845324  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:32.144312  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:32.256957  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:32.343040  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:32.644280  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:32.842223  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:33.144804  831680 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:26:33.342685  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:33.646249  831680 kapi.go:107] duration metric: took 1m4.506247962s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 20:26:33.841825  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:34.342284  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:34.756232  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:34.842840  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:35.342257  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:35.904425  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:36.342832  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:36.842595  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:37.256168  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:37.342423  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:37.842295  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:38.342737  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:38.841837  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:39.256600  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:39.342153  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:39.842972  831680 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:26:40.342781  831680 kapi.go:107] duration metric: took 1m9.505513818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 20:26:40.344653  831680 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1205 20:26:40.346074  831680 addons.go:510] duration metric: took 1m18.01538325s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget cloud-spanner metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1205 20:26:41.757060  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:44.256439  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:46.813376  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:49.255762  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:51.756546  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:54.256613  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:56.256669  831680 pod_ready.go:103] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"False"
	I1205 20:26:58.757170  831680 pod_ready.go:93] pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:58.757270  831680 pod_ready.go:82] duration metric: took 37.507238319s for pod "metrics-server-84c5f94fbc-lc9cp" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:58.757298  831680 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5zspz" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:58.767046  831680 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5zspz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:26:58.767079  831680 pod_ready.go:82] duration metric: took 9.767421ms for pod "nvidia-device-plugin-daemonset-5zspz" in "kube-system" namespace to be "Ready" ...
	I1205 20:26:58.767108  831680 pod_ready.go:39] duration metric: took 1m16.833690429s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:26:58.767135  831680 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:26:58.767180  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:26:58.767246  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:26:58.803654  831680 cri.go:89] found id: "98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:26:58.803684  831680 cri.go:89] found id: ""
	I1205 20:26:58.803693  831680 logs.go:282] 1 containers: [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889]
	I1205 20:26:58.803744  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.807187  831680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:26:58.807275  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:26:58.842029  831680 cri.go:89] found id: "feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:26:58.842051  831680 cri.go:89] found id: ""
	I1205 20:26:58.842060  831680 logs.go:282] 1 containers: [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201]
	I1205 20:26:58.842106  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.845891  831680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:26:58.845954  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:26:58.881333  831680 cri.go:89] found id: "978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:26:58.881361  831680 cri.go:89] found id: ""
	I1205 20:26:58.881372  831680 logs.go:282] 1 containers: [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629]
	I1205 20:26:58.881423  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.885224  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:26:58.885298  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:26:58.920628  831680 cri.go:89] found id: "c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:26:58.920649  831680 cri.go:89] found id: ""
	I1205 20:26:58.920657  831680 logs.go:282] 1 containers: [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01]
	I1205 20:26:58.920703  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.924275  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:26:58.924343  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:26:58.958807  831680 cri.go:89] found id: "42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:26:58.958828  831680 cri.go:89] found id: ""
	I1205 20:26:58.958836  831680 logs.go:282] 1 containers: [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8]
	I1205 20:26:58.958881  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:58.962504  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:26:58.962576  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:26:58.998901  831680 cri.go:89] found id: "554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:26:58.998930  831680 cri.go:89] found id: ""
	I1205 20:26:58.998939  831680 logs.go:282] 1 containers: [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578]
	I1205 20:26:58.998997  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:59.002419  831680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:26:59.002479  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:26:59.036667  831680 cri.go:89] found id: "ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:26:59.036697  831680 cri.go:89] found id: ""
	I1205 20:26:59.036708  831680 logs.go:282] 1 containers: [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c]
	I1205 20:26:59.036752  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:26:59.040283  831680 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:26:59.040315  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:26:59.120466  831680 logs.go:123] Gathering logs for container status ...
	I1205 20:26:59.120517  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:26:59.164079  831680 logs.go:123] Gathering logs for dmesg ...
	I1205 20:26:59.164112  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:26:59.191118  831680 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:26:59.191160  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:26:59.296797  831680 logs.go:123] Gathering logs for etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] ...
	I1205 20:26:59.296833  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:26:59.355024  831680 logs.go:123] Gathering logs for coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] ...
	I1205 20:26:59.355082  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:26:59.392871  831680 logs.go:123] Gathering logs for kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] ...
	I1205 20:26:59.392925  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:26:59.430387  831680 logs.go:123] Gathering logs for kubelet ...
	I1205 20:26:59.430421  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:26:59.524394  831680 logs.go:123] Gathering logs for kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] ...
	I1205 20:26:59.524435  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:26:59.571513  831680 logs.go:123] Gathering logs for kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] ...
	I1205 20:26:59.571549  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:26:59.611369  831680 logs.go:123] Gathering logs for kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] ...
	I1205 20:26:59.611406  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:26:59.669885  831680 logs.go:123] Gathering logs for kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] ...
	I1205 20:26:59.669929  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:02.206306  831680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:27:02.221639  831680 api_server.go:72] duration metric: took 1m39.890978267s to wait for apiserver process to appear ...
	I1205 20:27:02.221673  831680 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:27:02.221727  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:27:02.221782  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:27:02.258378  831680 cri.go:89] found id: "98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:02.258408  831680 cri.go:89] found id: ""
	I1205 20:27:02.258416  831680 logs.go:282] 1 containers: [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889]
	I1205 20:27:02.258464  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.262228  831680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:27:02.262301  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:27:02.297341  831680 cri.go:89] found id: "feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:02.297377  831680 cri.go:89] found id: ""
	I1205 20:27:02.297388  831680 logs.go:282] 1 containers: [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201]
	I1205 20:27:02.297443  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.301020  831680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:27:02.301087  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:27:02.337844  831680 cri.go:89] found id: "978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:02.337890  831680 cri.go:89] found id: ""
	I1205 20:27:02.337901  831680 logs.go:282] 1 containers: [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629]
	I1205 20:27:02.337959  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.341911  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:27:02.342003  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:27:02.377648  831680 cri.go:89] found id: "c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:02.377670  831680 cri.go:89] found id: ""
	I1205 20:27:02.377678  831680 logs.go:282] 1 containers: [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01]
	I1205 20:27:02.377723  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.381391  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:27:02.381465  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:27:02.417806  831680 cri.go:89] found id: "42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:02.417833  831680 cri.go:89] found id: ""
	I1205 20:27:02.417845  831680 logs.go:282] 1 containers: [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8]
	I1205 20:27:02.417893  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.421889  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:27:02.421962  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:27:02.458138  831680 cri.go:89] found id: "554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:02.458166  831680 cri.go:89] found id: ""
	I1205 20:27:02.458177  831680 logs.go:282] 1 containers: [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578]
	I1205 20:27:02.458235  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.462096  831680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:27:02.462154  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:27:02.497047  831680 cri.go:89] found id: "ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:02.497075  831680 cri.go:89] found id: ""
	I1205 20:27:02.497083  831680 logs.go:282] 1 containers: [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c]
	I1205 20:27:02.497129  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:02.500737  831680 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:27:02.500764  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:27:02.601676  831680 logs.go:123] Gathering logs for kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] ...
	I1205 20:27:02.601706  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:02.648766  831680 logs.go:123] Gathering logs for etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] ...
	I1205 20:27:02.648806  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:02.701070  831680 logs.go:123] Gathering logs for coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] ...
	I1205 20:27:02.701117  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:02.739322  831680 logs.go:123] Gathering logs for kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] ...
	I1205 20:27:02.739373  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:02.781198  831680 logs.go:123] Gathering logs for kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] ...
	I1205 20:27:02.781234  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:02.839513  831680 logs.go:123] Gathering logs for kubelet ...
	I1205 20:27:02.839552  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:27:02.927070  831680 logs.go:123] Gathering logs for dmesg ...
	I1205 20:27:02.927112  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:27:02.955779  831680 logs.go:123] Gathering logs for kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] ...
	I1205 20:27:02.955818  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:02.990872  831680 logs.go:123] Gathering logs for kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] ...
	I1205 20:27:02.990913  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:03.026581  831680 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:27:03.026611  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:27:03.102087  831680 logs.go:123] Gathering logs for container status ...
	I1205 20:27:03.102129  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:27:05.648657  831680 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 20:27:05.652650  831680 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 20:27:05.653582  831680 api_server.go:141] control plane version: v1.31.2
	I1205 20:27:05.653607  831680 api_server.go:131] duration metric: took 3.431927171s to wait for apiserver health ...
	I1205 20:27:05.653622  831680 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:27:05.653646  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:27:05.653697  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:27:05.689375  831680 cri.go:89] found id: "98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:05.689403  831680 cri.go:89] found id: ""
	I1205 20:27:05.689415  831680 logs.go:282] 1 containers: [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889]
	I1205 20:27:05.689468  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.693022  831680 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:27:05.693107  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:27:05.728582  831680 cri.go:89] found id: "feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:05.728612  831680 cri.go:89] found id: ""
	I1205 20:27:05.728623  831680 logs.go:282] 1 containers: [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201]
	I1205 20:27:05.728695  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.732551  831680 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:27:05.732634  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:27:05.768297  831680 cri.go:89] found id: "978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:05.768324  831680 cri.go:89] found id: ""
	I1205 20:27:05.768332  831680 logs.go:282] 1 containers: [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629]
	I1205 20:27:05.768391  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.772092  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:27:05.772155  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:27:05.807176  831680 cri.go:89] found id: "c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:05.807199  831680 cri.go:89] found id: ""
	I1205 20:27:05.807206  831680 logs.go:282] 1 containers: [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01]
	I1205 20:27:05.807261  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.810977  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:27:05.811040  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:27:05.848206  831680 cri.go:89] found id: "42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:05.848244  831680 cri.go:89] found id: ""
	I1205 20:27:05.848257  831680 logs.go:282] 1 containers: [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8]
	I1205 20:27:05.848309  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.852151  831680 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:27:05.852232  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:27:05.890010  831680 cri.go:89] found id: "554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:05.890035  831680 cri.go:89] found id: ""
	I1205 20:27:05.890043  831680 logs.go:282] 1 containers: [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578]
	I1205 20:27:05.890100  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.893706  831680 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:27:05.893763  831680 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:27:05.928421  831680 cri.go:89] found id: "ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:05.928449  831680 cri.go:89] found id: ""
	I1205 20:27:05.928458  831680 logs.go:282] 1 containers: [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c]
	I1205 20:27:05.928515  831680 ssh_runner.go:195] Run: which crictl
	I1205 20:27:05.932122  831680 logs.go:123] Gathering logs for kubelet ...
	I1205 20:27:05.932148  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:27:06.019265  831680 logs.go:123] Gathering logs for etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] ...
	I1205 20:27:06.019312  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201"
	I1205 20:27:06.072058  831680 logs.go:123] Gathering logs for coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] ...
	I1205 20:27:06.072107  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629"
	I1205 20:27:06.110337  831680 logs.go:123] Gathering logs for kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] ...
	I1205 20:27:06.110372  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8"
	I1205 20:27:06.145985  831680 logs.go:123] Gathering logs for kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] ...
	I1205 20:27:06.146020  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578"
	I1205 20:27:06.205238  831680 logs.go:123] Gathering logs for kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] ...
	I1205 20:27:06.205281  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c"
	I1205 20:27:06.241473  831680 logs.go:123] Gathering logs for dmesg ...
	I1205 20:27:06.241502  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:27:06.269057  831680 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:27:06.269099  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:27:06.378903  831680 logs.go:123] Gathering logs for kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] ...
	I1205 20:27:06.378938  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889"
	I1205 20:27:06.426943  831680 logs.go:123] Gathering logs for kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] ...
	I1205 20:27:06.426985  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01"
	I1205 20:27:06.469419  831680 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:27:06.469465  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:27:06.554104  831680 logs.go:123] Gathering logs for container status ...
	I1205 20:27:06.554155  831680 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:27:09.109489  831680 system_pods.go:59] 19 kube-system pods found
	I1205 20:27:09.109536  831680 system_pods.go:61] "amd-gpu-device-plugin-rc729" [c2c85683-d2fe-4fe5-bee0-cb72305ef72e] Running
	I1205 20:27:09.109543  831680 system_pods.go:61] "coredns-7c65d6cfc9-dkkxw" [ab688262-31c0-4d73-84f9-79988d76bb32] Running
	I1205 20:27:09.109547  831680 system_pods.go:61] "csi-hostpath-attacher-0" [5d14e0fd-b0e0-467f-b1cb-d8385382d57e] Running
	I1205 20:27:09.109550  831680 system_pods.go:61] "csi-hostpath-resizer-0" [e9117e43-09b3-4a31-8336-6610a83137be] Running
	I1205 20:27:09.109556  831680 system_pods.go:61] "csi-hostpathplugin-xjjqm" [e76e7df4-19a0-4da7-959e-77806daa2ad0] Running
	I1205 20:27:09.109561  831680 system_pods.go:61] "etcd-addons-583828" [0e09f289-f6cc-4d00-8613-be519b92139f] Running
	I1205 20:27:09.109565  831680 system_pods.go:61] "kindnet-dfgk2" [853b95db-fec0-426a-809a-05c807358dfa] Running
	I1205 20:27:09.109568  831680 system_pods.go:61] "kube-apiserver-addons-583828" [3efa3769-d977-4896-922f-f11b696b2661] Running
	I1205 20:27:09.109571  831680 system_pods.go:61] "kube-controller-manager-addons-583828" [c763df0e-ccca-4c39-bf2f-a7e3393f34db] Running
	I1205 20:27:09.109575  831680 system_pods.go:61] "kube-ingress-dns-minikube" [7fdb2265-3f78-4fd7-9f95-2ee7d4361c8c] Running
	I1205 20:27:09.109578  831680 system_pods.go:61] "kube-proxy-7b2sh" [80fbfc76-9441-46fa-b36f-0b4c43010444] Running
	I1205 20:27:09.109581  831680 system_pods.go:61] "kube-scheduler-addons-583828" [5c1ad2e6-957a-4098-b3b6-efe050ca5709] Running
	I1205 20:27:09.109584  831680 system_pods.go:61] "metrics-server-84c5f94fbc-lc9cp" [30aaf999-d2c9-45af-b24e-e74e1c57353b] Running
	I1205 20:27:09.109588  831680 system_pods.go:61] "nvidia-device-plugin-daemonset-5zspz" [640da076-aa23-44e4-8e0d-03530daed62f] Running
	I1205 20:27:09.109591  831680 system_pods.go:61] "registry-66c9cd494c-z49gz" [fe21bb58-8336-4e34-b5f4-ad786e9a2fac] Running
	I1205 20:27:09.109594  831680 system_pods.go:61] "registry-proxy-fzjzn" [6dd2b29c-df34-4531-be7e-32c564376c8d] Running
	I1205 20:27:09.109597  831680 system_pods.go:61] "snapshot-controller-56fcc65765-9xqwt" [56140c8a-3229-4005-b2ff-25c148dd6e76] Running
	I1205 20:27:09.109600  831680 system_pods.go:61] "snapshot-controller-56fcc65765-wwprs" [cc942f85-fc68-4c97-a27b-fc783a1ae47c] Running
	I1205 20:27:09.109604  831680 system_pods.go:61] "storage-provisioner" [bc98964a-3b9e-4e28-8503-ef8578884db4] Running
	I1205 20:27:09.109610  831680 system_pods.go:74] duration metric: took 3.455983098s to wait for pod list to return data ...
	I1205 20:27:09.109622  831680 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:27:09.112252  831680 default_sa.go:45] found service account: "default"
	I1205 20:27:09.112277  831680 default_sa.go:55] duration metric: took 2.64869ms for default service account to be created ...
	I1205 20:27:09.112285  831680 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:27:09.121090  831680 system_pods.go:86] 19 kube-system pods found
	I1205 20:27:09.121119  831680 system_pods.go:89] "amd-gpu-device-plugin-rc729" [c2c85683-d2fe-4fe5-bee0-cb72305ef72e] Running
	I1205 20:27:09.121125  831680 system_pods.go:89] "coredns-7c65d6cfc9-dkkxw" [ab688262-31c0-4d73-84f9-79988d76bb32] Running
	I1205 20:27:09.121129  831680 system_pods.go:89] "csi-hostpath-attacher-0" [5d14e0fd-b0e0-467f-b1cb-d8385382d57e] Running
	I1205 20:27:09.121133  831680 system_pods.go:89] "csi-hostpath-resizer-0" [e9117e43-09b3-4a31-8336-6610a83137be] Running
	I1205 20:27:09.121137  831680 system_pods.go:89] "csi-hostpathplugin-xjjqm" [e76e7df4-19a0-4da7-959e-77806daa2ad0] Running
	I1205 20:27:09.121140  831680 system_pods.go:89] "etcd-addons-583828" [0e09f289-f6cc-4d00-8613-be519b92139f] Running
	I1205 20:27:09.121144  831680 system_pods.go:89] "kindnet-dfgk2" [853b95db-fec0-426a-809a-05c807358dfa] Running
	I1205 20:27:09.121148  831680 system_pods.go:89] "kube-apiserver-addons-583828" [3efa3769-d977-4896-922f-f11b696b2661] Running
	I1205 20:27:09.121152  831680 system_pods.go:89] "kube-controller-manager-addons-583828" [c763df0e-ccca-4c39-bf2f-a7e3393f34db] Running
	I1205 20:27:09.121155  831680 system_pods.go:89] "kube-ingress-dns-minikube" [7fdb2265-3f78-4fd7-9f95-2ee7d4361c8c] Running
	I1205 20:27:09.121159  831680 system_pods.go:89] "kube-proxy-7b2sh" [80fbfc76-9441-46fa-b36f-0b4c43010444] Running
	I1205 20:27:09.121162  831680 system_pods.go:89] "kube-scheduler-addons-583828" [5c1ad2e6-957a-4098-b3b6-efe050ca5709] Running
	I1205 20:27:09.121169  831680 system_pods.go:89] "metrics-server-84c5f94fbc-lc9cp" [30aaf999-d2c9-45af-b24e-e74e1c57353b] Running
	I1205 20:27:09.121175  831680 system_pods.go:89] "nvidia-device-plugin-daemonset-5zspz" [640da076-aa23-44e4-8e0d-03530daed62f] Running
	I1205 20:27:09.121179  831680 system_pods.go:89] "registry-66c9cd494c-z49gz" [fe21bb58-8336-4e34-b5f4-ad786e9a2fac] Running
	I1205 20:27:09.121182  831680 system_pods.go:89] "registry-proxy-fzjzn" [6dd2b29c-df34-4531-be7e-32c564376c8d] Running
	I1205 20:27:09.121186  831680 system_pods.go:89] "snapshot-controller-56fcc65765-9xqwt" [56140c8a-3229-4005-b2ff-25c148dd6e76] Running
	I1205 20:27:09.121194  831680 system_pods.go:89] "snapshot-controller-56fcc65765-wwprs" [cc942f85-fc68-4c97-a27b-fc783a1ae47c] Running
	I1205 20:27:09.121197  831680 system_pods.go:89] "storage-provisioner" [bc98964a-3b9e-4e28-8503-ef8578884db4] Running
	I1205 20:27:09.121205  831680 system_pods.go:126] duration metric: took 8.913738ms to wait for k8s-apps to be running ...
	I1205 20:27:09.121212  831680 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:27:09.121264  831680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:27:09.133668  831680 system_svc.go:56] duration metric: took 12.443276ms WaitForService to wait for kubelet
	I1205 20:27:09.133703  831680 kubeadm.go:582] duration metric: took 1m46.803049203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:27:09.133727  831680 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:27:09.136734  831680 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1205 20:27:09.136766  831680 node_conditions.go:123] node cpu capacity is 8
	I1205 20:27:09.136783  831680 node_conditions.go:105] duration metric: took 3.050647ms to run NodePressure ...
	I1205 20:27:09.136798  831680 start.go:241] waiting for startup goroutines ...
	I1205 20:27:09.136807  831680 start.go:246] waiting for cluster config update ...
	I1205 20:27:09.136828  831680 start.go:255] writing updated cluster config ...
	I1205 20:27:09.137171  831680 ssh_runner.go:195] Run: rm -f paused
	I1205 20:27:09.190358  831680 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:27:09.193651  831680 out.go:177] * Done! kubectl is now configured to use "addons-583828" cluster and "default" namespace by default
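	The run log above ends with minikube's own readiness polling (pod_ready.go / kapi.go) succeeding and the apiserver /healthz probe at 20:27:05 returning 200 "ok". A minimal, hedged sketch of reproducing those same checks by hand against this cluster (these commands are illustrative and are not part of the test harness):

	    # wait on the pod the log was polling, using the pod name recorded above
	    kubectl --context addons-583828 -n kube-system wait \
	      --for=condition=Ready pod/metrics-server-84c5f94fbc-lc9cp --timeout=6m0s
	    # hit the same healthz endpoint the log shows returning 200 "ok"
	    curl -sk https://192.168.49.2:8443/healthz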
	
	
	==> CRI-O <==
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.015601724Z" level=info msg="Removed pod sandbox: 54724df61defd84ca0ccd6082bb3a45a5ee3f00472f90c72b7acd3cda43aa5c0" id=f076f8ec-6d20-47e2-9f7b-109952511c57 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.016186539Z" level=info msg="Stopping pod sandbox: 4c4c6af0a3e9acc699ed5fb9d3f084a4da7c79eeaf7a6b31fb8a045b222aee6e" id=65e069e6-ee04-451e-8bbf-326de32cdef7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.016231099Z" level=info msg="Stopped pod sandbox (already stopped): 4c4c6af0a3e9acc699ed5fb9d3f084a4da7c79eeaf7a6b31fb8a045b222aee6e" id=65e069e6-ee04-451e-8bbf-326de32cdef7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.016671174Z" level=info msg="Removing pod sandbox: 4c4c6af0a3e9acc699ed5fb9d3f084a4da7c79eeaf7a6b31fb8a045b222aee6e" id=d0dc4793-9e7f-43c9-9d89-e6dc7191f7f4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.022804494Z" level=info msg="Removed pod sandbox: 4c4c6af0a3e9acc699ed5fb9d3f084a4da7c79eeaf7a6b31fb8a045b222aee6e" id=d0dc4793-9e7f-43c9-9d89-e6dc7191f7f4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.023353913Z" level=info msg="Stopping pod sandbox: e7502c0a2abc0824764ccb25d2ed8c9575cd78d62e9d3ec0e2dc6ced36593b36" id=e15c42ce-45d6-4d99-ba62-aab2348378b4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.023396970Z" level=info msg="Stopped pod sandbox (already stopped): e7502c0a2abc0824764ccb25d2ed8c9575cd78d62e9d3ec0e2dc6ced36593b36" id=e15c42ce-45d6-4d99-ba62-aab2348378b4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.023733990Z" level=info msg="Removing pod sandbox: e7502c0a2abc0824764ccb25d2ed8c9575cd78d62e9d3ec0e2dc6ced36593b36" id=1182bcc5-a647-4f41-b4bb-ca09c32eeaca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 20:29:17 addons-583828 crio[1025]: time="2024-12-05 20:29:17.029694840Z" level=info msg="Removed pod sandbox: e7502c0a2abc0824764ccb25d2ed8c9575cd78d62e9d3ec0e2dc6ced36593b36" id=1182bcc5-a647-4f41-b4bb-ca09c32eeaca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 05 20:29:48 addons-583828 crio[1025]: time="2024-12-05 20:29:48.740819100Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=443eafb0-7c0d-49a1-ae51-6bb97359d680 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:29:48 addons-583828 crio[1025]: time="2024-12-05 20:29:48.741140956Z" level=info msg="Image docker.io/nginx:alpine not found" id=443eafb0-7c0d-49a1-ae51-6bb97359d680 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:30:02 addons-583828 crio[1025]: time="2024-12-05 20:30:02.738254833Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=96d65578-7862-48d6-bdaf-a23ce712f5eb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:30:02 addons-583828 crio[1025]: time="2024-12-05 20:30:02.738592950Z" level=info msg="Image docker.io/nginx:alpine not found" id=96d65578-7862-48d6-bdaf-a23ce712f5eb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:30:02 addons-583828 crio[1025]: time="2024-12-05 20:30:02.739151577Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=f029d12a-26dc-4d15-ab42-84eb8c375573 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:30:02 addons-583828 crio[1025]: time="2024-12-05 20:30:02.755850328Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 05 20:31:15 addons-583828 crio[1025]: time="2024-12-05 20:31:15.738449535Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1076daa0-9414-4080-ab28-bffe07d174c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:15 addons-583828 crio[1025]: time="2024-12-05 20:31:15.738777560Z" level=info msg="Image docker.io/nginx:alpine not found" id=1076daa0-9414-4080-ab28-bffe07d174c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:30 addons-583828 crio[1025]: time="2024-12-05 20:31:30.738670695Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=11e0e536-bf7d-4e76-87cc-33139556fcb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:30 addons-583828 crio[1025]: time="2024-12-05 20:31:30.738993527Z" level=info msg="Image docker.io/nginx:alpine not found" id=11e0e536-bf7d-4e76-87cc-33139556fcb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:43 addons-583828 crio[1025]: time="2024-12-05 20:31:43.738221479Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fd37c6ea-b623-4b36-a645-1765ac684360 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:43 addons-583828 crio[1025]: time="2024-12-05 20:31:43.738545183Z" level=info msg="Image docker.io/nginx:alpine not found" id=fd37c6ea-b623-4b36-a645-1765ac684360 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:55 addons-583828 crio[1025]: time="2024-12-05 20:31:55.738714766Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=69647c0b-378c-494d-a15a-5fee20ccf527 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:55 addons-583828 crio[1025]: time="2024-12-05 20:31:55.738955217Z" level=info msg="Image docker.io/nginx:alpine not found" id=69647c0b-378c-494d-a15a-5fee20ccf527 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:31:55 addons-583828 crio[1025]: time="2024-12-05 20:31:55.739570126Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=330ac791-7ec1-4bb7-a48e-4fde8fa426d7 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:31:55 addons-583828 crio[1025]: time="2024-12-05 20:31:55.744263530Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
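	The CRI-O entries above show the kubelet repeatedly asking for docker.io/nginx:alpine, CRI-O reporting "Image docker.io/nginx:alpine not found", and the pull being retried; this matches the nginx pod's ImagePullBackOff. A hedged example of checking the same state directly on the node (assumed diagnostic steps, not taken from this report):

	    # list images already present on the node
	    minikube -p addons-583828 ssh -- sudo crictl images | grep nginx
	    # retry the failing pull manually to see the underlying error
	    minikube -p addons-583828 ssh -- sudo crictl pull docker.io/nginx:alpine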
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd200ee7a91a4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          5 minutes ago       Running             busybox                   0                   4becac5591990       busybox
	3ee2ba2ec2cef       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             6 minutes ago       Running             controller                0                   e2282262607e1       ingress-nginx-controller-5f85ff4588-c4fhh
	cd1af0bd98187       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             6 minutes ago       Exited              patch                     3                   3d641e4195ff0       ingress-nginx-admission-patch-n769w
	6a529b44bf189       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   6 minutes ago       Exited              create                    0                   f9d6654f6e519       ingress-nginx-admission-create-qdcz4
	19290d766dd43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             6 minutes ago       Running             minikube-ingress-dns      0                   c4033dafe6f49       kube-ingress-dns-minikube
	e2b63d8828d81       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        6 minutes ago       Running             metrics-server            0                   dc75be1693886       metrics-server-84c5f94fbc-lc9cp
	ef40984194282       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             7 minutes ago       Running             storage-provisioner       0                   4dd5b84d1fd29       storage-provisioner
	978912424ba57       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             7 minutes ago       Running             coredns                   0                   21cfb5b0d810f       coredns-7c65d6cfc9-dkkxw
	ad993918bb3ca       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                           7 minutes ago       Running             kindnet-cni               0                   94fa1d19b901b       kindnet-dfgk2
	42459303e80f3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             7 minutes ago       Running             kube-proxy                0                   532032805d930       kube-proxy-7b2sh
	98a4ad0de8f4c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             7 minutes ago       Running             kube-apiserver            0                   82de1aca89145       kube-apiserver-addons-583828
	feeb541e697ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             7 minutes ago       Running             etcd                      0                   6b8b546dc20c4       etcd-addons-583828
	c841c0b382894       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             7 minutes ago       Running             kube-scheduler            0                   b4ff6cab61172       kube-scheduler-addons-583828
	554c27961eea1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             7 minutes ago       Running             kube-controller-manager   0                   5b89e979ff58f       kube-controller-manager-addons-583828
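	The table above corresponds to the "container status" gathering step in the run log (crictl ps -a); note that no nginx container appears in it, consistent with the image never having been pulled. A hedged example of regenerating the same table on the node (assumed command, not from this report):

	    minikube -p addons-583828 ssh -- sudo crictl ps -a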
	
	
	==> coredns [978912424ba571d40b90e45448878d2722100731d5da494944e65e91c944a629] <==
	[INFO] 10.244.0.19:56539 - 10779 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099077s
	[INFO] 10.244.0.19:60113 - 62198 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004658074s
	[INFO] 10.244.0.19:60113 - 62439 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004793049s
	[INFO] 10.244.0.19:42897 - 16947 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005879233s
	[INFO] 10.244.0.19:42897 - 16668 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006192356s
	[INFO] 10.244.0.19:52883 - 60026 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00578623s
	[INFO] 10.244.0.19:52883 - 59813 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005887045s
	[INFO] 10.244.0.19:41021 - 42702 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078591s
	[INFO] 10.244.0.19:41021 - 42253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121762s
	[INFO] 10.244.0.21:49427 - 55970 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000230552s
	[INFO] 10.244.0.21:42081 - 61281 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000288918s
	[INFO] 10.244.0.21:46440 - 30975 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166915s
	[INFO] 10.244.0.21:54236 - 31133 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016539s
	[INFO] 10.244.0.21:50537 - 63442 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123188s
	[INFO] 10.244.0.21:59373 - 27825 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153511s
	[INFO] 10.244.0.21:46412 - 12778 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005744036s
	[INFO] 10.244.0.21:55115 - 16737 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006700755s
	[INFO] 10.244.0.21:55800 - 44793 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00487147s
	[INFO] 10.244.0.21:55627 - 40386 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005406998s
	[INFO] 10.244.0.21:60313 - 33442 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007137006s
	[INFO] 10.244.0.21:53320 - 23314 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007268539s
	[INFO] 10.244.0.21:45779 - 19345 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000886948s
	[INFO] 10.244.0.21:34651 - 50515 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001026868s
	[INFO] 10.244.0.25:60752 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000382497s
	[INFO] 10.244.0.25:35918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019061s
	
	
	==> describe nodes <==
	Name:               addons-583828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-583828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=addons-583828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_25_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-583828
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:25:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-583828
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:32:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:28:20 +0000   Thu, 05 Dec 2024 20:25:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:28:20 +0000   Thu, 05 Dec 2024 20:25:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:28:20 +0000   Thu, 05 Dec 2024 20:25:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:28:20 +0000   Thu, 05 Dec 2024 20:25:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-583828
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5cdc1a1dcb246fca33732e03f1ddc97
	  System UUID:                49ad83b1-9a0e-4726-8ae1-8ba9c7e57d54
	  Boot ID:                    39024a98-8447-46b2-bbc5-7915429b9c2d
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-c4fhh    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m18s
	  kube-system                 coredns-7c65d6cfc9-dkkxw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m24s
	  kube-system                 etcd-addons-583828                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m30s
	  kube-system                 kindnet-dfgk2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m25s
	  kube-system                 kube-apiserver-addons-583828                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-controller-manager-addons-583828        200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-proxy-7b2sh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-scheduler-addons-583828                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 metrics-server-84c5f94fbc-lc9cp              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m20s  kube-proxy       
	  Normal   Starting                 7m30s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m30s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m30s  kubelet          Node addons-583828 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m30s  kubelet          Node addons-583828 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m30s  kubelet          Node addons-583828 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m25s  node-controller  Node addons-583828 event: Registered Node addons-583828 in Controller
	  Normal   NodeReady                7m5s   kubelet          Node addons-583828 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 9e 58 22 0d b9 08 06
	[ +28.753910] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 78 7a 98 fe 25 08 06
	[  +1.292059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 28 6f da 79 a6 08 06
	[  +0.021715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e c3 0d 92 91 5a 08 06
	[Dec 5 20:11] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 58 3b a6 8d 40 08 06
	[ +30.901947] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3c 09 52 3d e1 08 06
	[  +1.444771] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 03 05 4c 3e 73 08 06
	[  +0.058589] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 48 98 e5 23 33 08 06
	[  +6.156143] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 10 f3 a9 91 d9 08 06
	[Dec 5 20:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 18 0d f3 3a 83 08 06
	[  +1.482986] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce c3 68 13 fd 23 08 06
	[  +0.033369] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 8a 70 ff f0 d7 08 06
	[  +6.306172] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca ef 8b ac b6 8f 08 06
	
	
	==> etcd [feeb541e697add202f6fa8fd71a08502c927b49ed6d2db518a81f341716e3201] <==
	{"level":"info","ts":"2024-12-05T20:25:26.730232Z","caller":"traceutil/trace.go:171","msg":"trace[936052187] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:443; }","duration":"197.247445ms","start":"2024-12-05T20:25:26.532967Z","end":"2024-12-05T20:25:26.730214Z","steps":["trace[936052187] 'read index received'  (duration: 78.358618ms)","trace[936052187] 'applied index is now lower than readState.Index'  (duration: 118.886702ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:25:26.731906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.920002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-583828\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-05T20:25:26.731954Z","caller":"traceutil/trace.go:171","msg":"trace[869354678] range","detail":"{range_begin:/registry/minions/addons-583828; range_end:; response_count:1; response_revision:435; }","duration":"198.979156ms","start":"2024-12-05T20:25:26.532962Z","end":"2024-12-05T20:25:26.731941Z","steps":["trace[869354678] 'agreement among raft nodes before linearized reading'  (duration: 198.839639ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.732364Z","caller":"traceutil/trace.go:171","msg":"trace[1932728294] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"199.340257ms","start":"2024-12-05T20:25:26.533012Z","end":"2024-12-05T20:25:26.732352Z","steps":["trace[1932728294] 'process raft request'  (duration: 193.537191ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.809222Z","caller":"traceutil/trace.go:171","msg":"trace[1215879915] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"276.099106ms","start":"2024-12-05T20:25:26.533090Z","end":"2024-12-05T20:25:26.809189Z","steps":["trace[1215879915] 'process raft request'  (duration: 193.494264ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.809623Z","caller":"traceutil/trace.go:171","msg":"trace[1551307167] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"276.289138ms","start":"2024-12-05T20:25:26.533317Z","end":"2024-12-05T20:25:26.809606Z","steps":["trace[1551307167] 'process raft request'  (duration: 193.349913ms)","trace[1551307167] 'compare'  (duration: 82.407897ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:25:26.809944Z","caller":"traceutil/trace.go:171","msg":"trace[1466606709] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"100.393382ms","start":"2024-12-05T20:25:26.709540Z","end":"2024-12-05T20:25:26.809933Z","steps":["trace[1466606709] 'process raft request'  (duration: 99.691152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.809949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.397075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.812524Z","caller":"traceutil/trace.go:171","msg":"trace[766908047] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:439; }","duration":"200.97222ms","start":"2024-12-05T20:25:26.611533Z","end":"2024-12-05T20:25:26.812506Z","steps":["trace[766908047] 'agreement among raft nodes before linearized reading'  (duration: 198.378746ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.809689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.177847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.819703Z","caller":"traceutil/trace.go:171","msg":"trace[87578902] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:439; }","duration":"286.184954ms","start":"2024-12-05T20:25:26.533487Z","end":"2024-12-05T20:25:26.819672Z","steps":["trace[87578902] 'agreement among raft nodes before linearized reading'  (duration: 275.942532ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.809965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.492589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.820828Z","caller":"traceutil/trace.go:171","msg":"trace[681513394] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:439; }","duration":"111.354421ms","start":"2024-12-05T20:25:26.709456Z","end":"2024-12-05T20:25:26.820810Z","steps":["trace[681513394] 'agreement among raft nodes before linearized reading'  (duration: 100.447614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:25:26.810157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.487452ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:25:26.810302Z","caller":"traceutil/trace.go:171","msg":"trace[503414887] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"100.686269ms","start":"2024-12-05T20:25:26.709603Z","end":"2024-12-05T20:25:26.810290Z","steps":["trace[503414887] 'process raft request'  (duration: 99.688666ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:25:26.828003Z","caller":"traceutil/trace.go:171","msg":"trace[1437894167] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:439; }","duration":"118.320015ms","start":"2024-12-05T20:25:26.709659Z","end":"2024-12-05T20:25:26.827979Z","steps":["trace[1437894167] 'agreement among raft nodes before linearized reading'  (duration: 100.478382ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:26:28.626249Z","caller":"traceutil/trace.go:171","msg":"trace[88854942] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"116.744993ms","start":"2024-12-05T20:26:28.509486Z","end":"2024-12-05T20:26:28.626231Z","steps":["trace[88854942] 'process raft request'  (duration: 116.610104ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:26:28.626441Z","caller":"traceutil/trace.go:171","msg":"trace[1904650908] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1192; }","duration":"116.655701ms","start":"2024-12-05T20:26:28.509773Z","end":"2024-12-05T20:26:28.626429Z","steps":["trace[1904650908] 'read index received'  (duration: 116.649101ms)","trace[1904650908] 'applied index is now lower than readState.Index'  (duration: 5.275µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:26:28.626551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.750114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:26:28.627089Z","caller":"traceutil/trace.go:171","msg":"trace[795612741] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:1160; }","duration":"117.303042ms","start":"2024-12-05T20:26:28.509768Z","end":"2024-12-05T20:26:28.627071Z","steps":["trace[795612741] 'agreement among raft nodes before linearized reading'  (duration: 116.695105ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:26:28.627970Z","caller":"traceutil/trace.go:171","msg":"trace[452118737] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"118.020962ms","start":"2024-12-05T20:26:28.509930Z","end":"2024-12-05T20:26:28.627951Z","steps":["trace[452118737] 'process raft request'  (duration: 117.908388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:26:28.627986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.585638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:26:28.628022Z","caller":"traceutil/trace.go:171","msg":"trace[643759814] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1161; }","duration":"106.630714ms","start":"2024-12-05T20:26:28.521382Z","end":"2024-12-05T20:26:28.628013Z","steps":["trace[643759814] 'agreement among raft nodes before linearized reading'  (duration: 106.548916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:26:47.356770Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.866849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-lc9cp\" ","response":"range_response_count:1 size:4862"}
	{"level":"info","ts":"2024-12-05T20:26:47.356853Z","caller":"traceutil/trace.go:171","msg":"trace[1070485999] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-lc9cp; range_end:; response_count:1; response_revision:1237; }","duration":"104.96485ms","start":"2024-12-05T20:26:47.251869Z","end":"2024-12-05T20:26:47.356834Z","steps":["trace[1070485999] 'range keys from in-memory index tree'  (duration: 104.712038ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:32:46 up  3:15,  0 users,  load average: 0.09, 0.59, 2.19
	Linux addons-583828 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ad993918bb3ca8e1603045e9dc81e54da924d5c34b4c9ffbdbe009e36c6f697c] <==
	I1205 20:30:41.117690       1 main.go:301] handling current node
	I1205 20:30:51.117575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:30:51.117618       1 main.go:301] handling current node
	I1205 20:31:01.113008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:31:01.113058       1 main.go:301] handling current node
	I1205 20:31:11.114727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:31:11.114778       1 main.go:301] handling current node
	I1205 20:31:21.113038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:31:21.113099       1 main.go:301] handling current node
	I1205 20:31:31.110756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:31:31.110797       1 main.go:301] handling current node
	I1205 20:31:41.116981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:31:41.117022       1 main.go:301] handling current node
	I1205 20:31:51.113018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:31:51.113072       1 main.go:301] handling current node
	I1205 20:32:01.110086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:32:01.110133       1 main.go:301] handling current node
	I1205 20:32:11.114475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:32:11.114532       1 main.go:301] handling current node
	I1205 20:32:21.118801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:32:21.118845       1 main.go:301] handling current node
	I1205 20:32:31.110887       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:32:31.110925       1 main.go:301] handling current node
	I1205 20:32:41.114190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:32:41.114229       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98a4ad0de8f4c261ce3a1d3b239fa0d90fa12f5c07a273a1f61f9493d4604889] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:27:03.809289       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 20:27:19.082649       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37954: use of closed network connection
	E1205 20:27:19.260421       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37990: use of closed network connection
	I1205 20:27:28.347350       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.86.24"}
	I1205 20:28:01.484387       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1205 20:28:06.071833       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 20:28:13.486077       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 20:28:14.502367       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 20:28:18.964986       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 20:28:19.165810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.167.39"}
	I1205 20:28:27.470478       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.470622       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.484594       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.484744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.486350       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.486389       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.530742       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.530900       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:28:27.622937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:28:27.622982       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 20:28:28.486747       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 20:28:28.623008       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 20:28:28.725732       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [554c27961eea1e555670e46e9578b5d55fc2338b4c3aa9045e74a3188fe53578] <==
	E1205 20:30:10.610853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:30:34.590971       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:30:34.591025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:30:38.579395       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:30:38.579445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:30:53.198614       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:30:53.198675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:31:03.085976       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:31:03.086027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:31:19.319164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:31:19.319223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:31:27.015862       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:31:27.015913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:31:33.535210       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:31:33.535257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:31:49.094950       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:31:49.095018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:31:53.111903       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:31:53.111952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:32:19.112811       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:32:19.112865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:32:29.709706       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:32:29.709761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:32:40.552170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:32:40.552220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [42459303e80f3737dcbfcff00d249bf4d4df8c862c4e0653bd13c6506302e8e8] <==
	I1205 20:25:22.519810       1 server_linux.go:66] "Using iptables proxy"
	I1205 20:25:23.214814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 20:25:23.214968       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:25:24.815600       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:25:24.815822       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:25:25.127338       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:25:25.217908       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:25:25.217961       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:25:25.220071       1 config.go:199] "Starting service config controller"
	I1205 20:25:25.220167       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:25:25.220233       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:25:25.220260       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:25:25.220993       1 config.go:328] "Starting node config controller"
	I1205 20:25:25.221110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:25:25.320398       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:25:25.510134       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:25:25.522410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c841c0b3828944892e5a6cc75ea5e4a34541410b15d0b16531beabb02de2ce01] <==
	W1205 20:25:14.423700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:25:14.423729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.423878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:25:14.423921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.423953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 20:25:14.423888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:25:14.423991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1205 20:25:14.423992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.424025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1205 20:25:14.424048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:25:14.424056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:14.424079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 20:25:14.424047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:25:14.424111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1205 20:25:14.424111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1205 20:25:14.424080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:15.289746       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:25:15.289800       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:25:15.305483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:25:15.305528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:15.474192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:25:15.474234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:25:15.508708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:25:15.508751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 20:25:18.520013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:31:03 addons-583828 kubelet[1621]: E1205 20:31:03.867227    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:31:06 addons-583828 kubelet[1621]: E1205 20:31:06.906742    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430666906476269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:06 addons-583828 kubelet[1621]: E1205 20:31:06.906773    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430666906476269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:15 addons-583828 kubelet[1621]: E1205 20:31:15.739117    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:31:16 addons-583828 kubelet[1621]: E1205 20:31:16.908767    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430676908490296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:16 addons-583828 kubelet[1621]: E1205 20:31:16.908815    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430676908490296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:26 addons-583828 kubelet[1621]: E1205 20:31:26.911479    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430686911254913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:26 addons-583828 kubelet[1621]: E1205 20:31:26.911517    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430686911254913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:30 addons-583828 kubelet[1621]: E1205 20:31:30.739276    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:31:36 addons-583828 kubelet[1621]: E1205 20:31:36.914422    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430696914159223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:36 addons-583828 kubelet[1621]: E1205 20:31:36.914459    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430696914159223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:43 addons-583828 kubelet[1621]: E1205 20:31:43.738871    1621 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="06f0ad05-fff2-461e-9051-b1a79714bd25"
	Dec 05 20:31:46 addons-583828 kubelet[1621]: E1205 20:31:46.916789    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430706916491104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:46 addons-583828 kubelet[1621]: E1205 20:31:46.916832    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430706916491104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:56 addons-583828 kubelet[1621]: E1205 20:31:56.919434    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430716919131660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:31:56 addons-583828 kubelet[1621]: E1205 20:31:56.919479    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430716919131660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:06 addons-583828 kubelet[1621]: E1205 20:32:06.922476    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430726922175119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:06 addons-583828 kubelet[1621]: E1205 20:32:06.922521    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430726922175119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:16 addons-583828 kubelet[1621]: E1205 20:32:16.925552    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430736925253093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:16 addons-583828 kubelet[1621]: E1205 20:32:16.925598    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430736925253093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:26 addons-583828 kubelet[1621]: I1205 20:32:26.738666    1621 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 20:32:26 addons-583828 kubelet[1621]: E1205 20:32:26.928218    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430746927913283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:26 addons-583828 kubelet[1621]: E1205 20:32:26.928257    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430746927913283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:36 addons-583828 kubelet[1621]: E1205 20:32:36.931306    1621 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430756931024540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:32:36 addons-583828 kubelet[1621]: E1205 20:32:36.931344    1621 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430756931024540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585326,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ef4098419428227b4d6972e656cc06bea872aea3e97c16b0c7340af1fd6d5cb5] <==
	I1205 20:25:42.556706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:25:42.566741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:25:42.566799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:25:42.613971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:25:42.614127       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"611609cd-4cc0-441e-94ab-a2e2be13b4e9", APIVersion:"v1", ResourceVersion:"894", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-583828_494b9d68-e7cd-4c8a-a94c-6c912f7efe5f became leader
	I1205 20:25:42.614218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-583828_494b9d68-e7cd-4c8a-a94c-6c912f7efe5f!
	I1205 20:25:42.715334       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-583828_494b9d68-e7cd-4c8a-a94c-6c912f7efe5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-583828 -n addons-583828
helpers_test.go:261: (dbg) Run:  kubectl --context addons-583828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-583828 describe pod nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-583828 describe pod nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w: exit status 1 (71.175955ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-583828/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:28:19 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wdtd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2wdtd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m28s                 default-scheduler  Successfully assigned default/nginx to addons-583828
	  Warning  Failed     3m55s                 kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m13s                 kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     104s (x3 over 3m55s)  kubelet            Error: ErrImagePull
	  Warning  Failed     104s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    64s (x5 over 3m55s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     64s (x5 over 3m55s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    52s (x4 over 4m28s)   kubelet            Pulling image "docker.io/nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qdcz4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-n769w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-583828 describe pod nginx ingress-nginx-admission-create-qdcz4 ingress-nginx-admission-patch-n769w: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (320.00s)
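The nginx pod described above never started because its pulls of docker.io/nginx:alpine kept hitting Docker Hub's unauthenticated pull limit (toomanyrequests). A workaround sketch, not part of the test run: authenticate image pulls in the cluster so docker.io stops rate-limiting them. DOCKERHUB_USER and DOCKERHUB_PASS are placeholders for real Docker Hub credentials.

# create a docker-registry pull secret in the default namespace
kubectl --context addons-583828 create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKERHUB_USER" \
  --docker-password="$DOCKERHUB_PASS"
# let the default service account use it for pulls in that namespace
kubectl --context addons-583828 patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Alternatively, the cluster could be started with a registry mirror (minikube start --registry-mirror=...) so the test images are not fetched from docker.io at all.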

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (189.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f7faf603-beb5-47be-88c2-65e6706e0edd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004673744s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-035676 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-035676 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-035676 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-035676 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8105cd9e-6de0-45ef-bdae-b7bee83bd8d0] Pending
helpers_test.go:344: "sp-pod" [8105cd9e-6de0-45ef-bdae-b7bee83bd8d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-035676 -n functional-035676
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-12-05 20:42:21.857266417 +0000 UTC m=+1075.665101215
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-035676 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-035676 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-035676/192.168.49.2
Start Time:       Thu, 05 Dec 2024 20:39:21 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-66k9w (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-66k9w:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-035676
Warning  Failed     91s               kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     91s               kubelet            Error: ErrImagePull
Normal   BackOff    91s               kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     91s               kubelet            Error: ImagePullBackOff
Normal   Pulling    79s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-035676 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-035676 logs sp-pod -n default: exit status 1 (67.597418ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-035676 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
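sp-pod is stuck for the same reason: docker.io/nginx cannot be pulled because of the Docker Hub rate limit. Since the audit log later in this report shows minikube image load being exercised against this profile, one hedged alternative is to side-load the image into the cluster so the pod does not need to reach docker.io, provided the pod's imagePullPolicy accepts a cached image (IfNotPresent) and the host itself can still pull or already has the image:

# pull (or reuse) the image on the host, then load it into the minikube node's runtime
docker pull docker.io/nginx
out/minikube-linux-amd64 -p functional-035676 image load docker.io/nginx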
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-035676
helpers_test.go:235: (dbg) docker inspect functional-035676:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7",
	        "Created": "2024-12-05T20:37:27.747322592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 857131,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T20:37:27.865185441Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/hosts",
	        "LogPath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7-json.log",
	        "Name": "/functional-035676",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035676:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-035676",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100-init/diff:/var/lib/docker/overlay2/0f5bc7fa09e0d0f29301db80becc3339e358e049d584dfb307a79bde49527770/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-035676",
	                "Source": "/var/lib/docker/volumes/functional-035676/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035676",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035676",
	                "name.minikube.sigs.k8s.io": "functional-035676",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76d3a2afa4994b5f0c602452ecfa7d9b636e228d4700a2725d3a9a82d57dd536",
	            "SandboxKey": "/var/run/docker/netns/76d3a2afa499",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035676": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "731bc11d1bd3da4dc51139780fcf291dcd693b3a8e7700749619b288cdd87458",
	                    "EndpointID": "393b85eaf9f7f3f8f75b5df6a7afb2d2dd1075df885c9aa85f3e07acad4823bf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035676",
	                        "e2affe68c424"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
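In the docker inspect output above, the container was created with empty HostPort bindings (PortBindings only requests 127.0.0.1), and Docker assigned the actual host ports at start, visible under NetworkSettings.Ports (e.g. 8441/tcp -> 127.0.0.1:32901). A sketch for reading one of those assigned ports directly, using the container name from this report:

# print the host port mapped to the API server port 8441/tcp
docker port functional-035676 8441/tcp
# the same value via an inspect format template
docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-035676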
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-035676 -n functional-035676
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 logs -n 25: (1.499448275s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| image          | functional-035676 image load --daemon                                      | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | kicbase/echo-server:functional-035676                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/ssl/certs/8303812.pem                                                 |                   |         |         |                     |                     |
	| image          | functional-035676 image ls                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /usr/share/ca-certificates/8303812.pem                                     |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                         | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:41 UTC |
	|                | -p functional-035676                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                     |                   |         |         |                     |                     |
	| image          | functional-035676 image save kicbase/echo-server:functional-035676         | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676 image rm                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | kicbase/echo-server:functional-035676                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676 image ls                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	| image          | functional-035676 image load                                               | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| service        | functional-035676 service                                                  | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | hello-node-connect --url                                                   |                   |         |         |                     |                     |
	| addons         | functional-035676 addons list                                              | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	| addons         | functional-035676 addons list                                              | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | -o json                                                                    |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/test/nested/copy/830381/hosts                                         |                   |         |         |                     |                     |
	| update-context | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh pgrep                                                | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-035676 image build -t                                           | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | localhost/my-image:functional-035676                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-035676 image ls                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:39:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:39:24.385769  870931 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:39:24.385888  870931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.385894  870931 out.go:358] Setting ErrFile to fd 2...
	I1205 20:39:24.385898  870931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.386220  870931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:39:24.386774  870931 out.go:352] Setting JSON to false
	I1205 20:39:24.387885  870931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12113,"bootTime":1733419051,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:39:24.388009  870931 start.go:139] virtualization: kvm guest
	I1205 20:39:24.390178  870931 out.go:177] * [functional-035676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:39:24.391705  870931 notify.go:220] Checking for updates...
	I1205 20:39:24.391712  870931 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:39:24.393211  870931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:39:24.394693  870931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:39:24.395973  870931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:39:24.397199  870931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:39:24.398443  870931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:39:24.400113  870931 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:39:24.400531  870931 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:39:24.422225  870931 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:39:24.422323  870931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:39:24.477630  870931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-12-05 20:39:24.467300148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:39:24.477769  870931 docker.go:318] overlay module found
	I1205 20:39:24.480254  870931 out.go:177] * Using the docker driver based on existing profile
	I1205 20:39:24.481752  870931 start.go:297] selected driver: docker
	I1205 20:39:24.481768  870931 start.go:901] validating driver "docker" against &{Name:functional-035676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-035676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:39:24.481862  870931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:39:24.484129  870931 out.go:201] 
	W1205 20:39:24.485549  870931 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 20:39:24.486824  870931 out.go:201] 
	
	
	==> CRI-O <==
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.566678887Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8c9a2ee5-3c80-4aec-b70e-b0ec3f01ad07 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.567229326Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=1484b0ff-2b94-4938-8039-c02c87f594a5 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.567594834Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2ef111ea-649c-4d11-82e9-849bdfaa6072 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.568601463Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.568754087Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=2ef111ea-649c-4d11-82e9-849bdfaa6072 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.569619282Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-695b96c756-s6qrs/kubernetes-dashboard" id=26698ccb-fdff-4699-87d7-f70807c73ace name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.569712136Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.581871132Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eb2d6388eb55dc09cb641f0a7d5b82287833b7e6ee8ab9eed3a329433ef31fe0/merged/etc/group: no such file or directory"
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.618232620Z" level=info msg="Created container c8f01a50e6ca491bda86f844a9d299010f832db6d7dc81378a4107c75673af23: kubernetes-dashboard/kubernetes-dashboard-695b96c756-s6qrs/kubernetes-dashboard" id=26698ccb-fdff-4699-87d7-f70807c73ace name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.618944444Z" level=info msg="Starting container: c8f01a50e6ca491bda86f844a9d299010f832db6d7dc81378a4107c75673af23" id=4b82a88a-682f-4bb1-85f9-514353cc5266 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:40:56 functional-035676 crio[4905]: time="2024-12-05 20:40:56.625398993Z" level=info msg="Started container" PID=8058 containerID=c8f01a50e6ca491bda86f844a9d299010f832db6d7dc81378a4107c75673af23 description=kubernetes-dashboard/kubernetes-dashboard-695b96c756-s6qrs/kubernetes-dashboard id=4b82a88a-682f-4bb1-85f9-514353cc5266 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0137ed4edec2531081f22e9faa506587b8f88bd9a235213db2c1efb11bc2f3d
	Dec 05 20:41:02 functional-035676 crio[4905]: time="2024-12-05 20:41:02.824755116Z" level=info msg="Checking image status: docker.io/nginx:latest" id=64d33efc-831d-480c-8d91-c7b8ccc4c23d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:41:02 functional-035676 crio[4905]: time="2024-12-05 20:41:02.825060381Z" level=info msg="Image docker.io/nginx:latest not found" id=64d33efc-831d-480c-8d91-c7b8ccc4c23d name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:41:27 functional-035676 crio[4905]: time="2024-12-05 20:41:27.176824342Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=6e6a56f4-55fc-4c32-8674-a9c5c7d83b3a name=/runtime.v1.ImageService/PullImage
	Dec 05 20:41:27 functional-035676 crio[4905]: time="2024-12-05 20:41:27.178266804Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 05 20:41:27 functional-035676 crio[4905]: time="2024-12-05 20:41:27.318362154Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=edecba92-520f-41e0-a9dc-3b8cea897bbd name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:41:27 functional-035676 crio[4905]: time="2024-12-05 20:41:27.318647531Z" level=info msg="Image docker.io/mysql:5.7 not found" id=edecba92-520f-41e0-a9dc-3b8cea897bbd name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:41:40 functional-035676 crio[4905]: time="2024-12-05 20:41:40.824288388Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=df2163ba-5cad-487f-95d8-b23f2bef4511 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:41:40 functional-035676 crio[4905]: time="2024-12-05 20:41:40.824604035Z" level=info msg="Image docker.io/mysql:5.7 not found" id=df2163ba-5cad-487f-95d8-b23f2bef4511 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:41:57 functional-035676 crio[4905]: time="2024-12-05 20:41:57.793361042Z" level=info msg="Pulling image: docker.io/nginx:latest" id=948309bd-ea44-42e8-a238-7787139c3d27 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:41:57 functional-035676 crio[4905]: time="2024-12-05 20:41:57.810444406Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 05 20:42:07 functional-035676 crio[4905]: time="2024-12-05 20:42:07.824547178Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=74c54f50-5132-43ef-ba6c-f9eb06fb8bcb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:42:07 functional-035676 crio[4905]: time="2024-12-05 20:42:07.824790926Z" level=info msg="Image docker.io/nginx:alpine not found" id=74c54f50-5132-43ef-ba6c-f9eb06fb8bcb name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:42:18 functional-035676 crio[4905]: time="2024-12-05 20:42:18.824870951Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=80a06327-8fe1-47c3-bc65-90353a7eb379 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:42:18 functional-035676 crio[4905]: time="2024-12-05 20:42:18.825181745Z" level=info msg="Image docker.io/nginx:alpine not found" id=80a06327-8fe1-47c3-bc65-90353a7eb379 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	c8f01a50e6ca4       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   c0137ed4edec2       kubernetes-dashboard-695b96c756-s6qrs
	24b513dc663dc       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   955f2725bf93b       dashboard-metrics-scraper-c5db448b4-gwx6g
	163d6ff8e1abf       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 2 minutes ago        Running             echoserver                  0                   ff4e3a43baa7a       hello-node-connect-67bdd5bbb4-bmhxn
	ba7dd2f43e881       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago        Exited              mount-munger                0                   7242bc7e82d74       busybox-mount
	bc3d7a7b23d7a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago        Running             echoserver                  0                   8d183dbac2202       hello-node-6b9f76b5c7-vmcv5
	b116deecc263b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     2                   bad95daac69bf       coredns-7c65d6cfc9-6gt9m
	aaa98d82a6c94       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 3 minutes ago        Running             kindnet-cni                 2                   b54850960faff       kindnet-45nv9
	4274488ffba47       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 3 minutes ago        Running             kube-proxy                  2                   12e57dbe4508f       kube-proxy-vxw66
	41ec2db9f4cca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         3                   aebe265caf687       storage-provisioner
	e111d068f8e99       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                 3 minutes ago        Running             kube-apiserver              0                   a1d3b4273b9ac       kube-apiserver-functional-035676
	ae64eefe1bd91       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 3 minutes ago        Running             kube-controller-manager     2                   a235fbc051f9a       kube-controller-manager-functional-035676
	b84b7138417d3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 3 minutes ago        Running             kube-scheduler              2                   52a7001c5af1c       kube-scheduler-functional-035676
	ddc20d49e0ed9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Running             etcd                        2                   6edd92de1ef50       etcd-functional-035676
	8a3c4140cf5f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Exited              storage-provisioner         2                   aebe265caf687       storage-provisioner
	1805fde25b584       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago        Exited              etcd                        1                   6edd92de1ef50       etcd-functional-035676
	9ee9570ac841a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 4 minutes ago        Exited              kube-scheduler              1                   52a7001c5af1c       kube-scheduler-functional-035676
	b3cef90390ff4       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 4 minutes ago        Exited              kindnet-cni                 1                   b54850960faff       kindnet-45nv9
	4faf41cf07613       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 4 minutes ago        Exited              kube-controller-manager     1                   a235fbc051f9a       kube-controller-manager-functional-035676
	2f5e074d44fce       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 4 minutes ago        Exited              kube-proxy                  1                   12e57dbe4508f       kube-proxy-vxw66
	6ae7d01ebe58d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     1                   bad95daac69bf       coredns-7c65d6cfc9-6gt9m
	
	
	==> coredns [6ae7d01ebe58d2a0f7432d4c6ed1f50c27acd2e2676c403ae718f0b357ee67e0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37411 - 7093 "HINFO IN 8719584038132038936.8455700699231624629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.097339807s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b116deecc263baf765b518b44722ae0fb50cca6f7669412b0e34b72bc09b66fe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33084 - 56432 "HINFO IN 3788741047440824265.7123424019254399322. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028469141s
	
	
	==> describe nodes <==
	Name:               functional-035676
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-035676
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=functional-035676
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:37:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-035676
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:42:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:41:22 +0000   Thu, 05 Dec 2024 20:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:41:22 +0000   Thu, 05 Dec 2024 20:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:41:22 +0000   Thu, 05 Dec 2024 20:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:41:22 +0000   Thu, 05 Dec 2024 20:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-035676
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 961f10607c9946829dd28b4e86637691
	  System UUID:                d72bcda0-23c8-41f2-89a2-742c96d43306
	  Boot ID:                    39024a98-8447-46b2-bbc5-7915429b9c2d
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-vmcv5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-node-connect-67bdd5bbb4-bmhxn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  default                     mysql-6cdb49bbb-5j57z                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     2m48s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-6gt9m                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m34s
	  kube-system                 etcd-functional-035676                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m40s
	  kube-system                 kindnet-45nv9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m35s
	  kube-system                 kube-apiserver-functional-035676             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-controller-manager-functional-035676    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-vxw66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-functional-035676             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-gwx6g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-s6qrs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m33s                  kube-proxy       
	  Normal   Starting                 3m31s                  kube-proxy       
	  Normal   Starting                 4m4s                   kube-proxy       
	  Warning  CgroupV1                 4m40s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m40s                  kubelet          Node functional-035676 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m40s                  kubelet          Node functional-035676 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m40s                  kubelet          Node functional-035676 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m35s                  node-controller  Node functional-035676 event: Registered Node functional-035676 in Controller
	  Normal   NodeReady                4m21s                  kubelet          Node functional-035676 status is now: NodeReady
	  Normal   RegisteredNode           4m1s                   node-controller  Node functional-035676 event: Registered Node functional-035676 in Controller
	  Normal   Starting                 3m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m37s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m36s (x8 over 3m37s)  kubelet          Node functional-035676 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m36s (x8 over 3m37s)  kubelet          Node functional-035676 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m36s (x7 over 3m37s)  kubelet          Node functional-035676 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m30s                  node-controller  Node functional-035676 event: Registered Node functional-035676 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 9e 58 22 0d b9 08 06
	[ +28.753910] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 78 7a 98 fe 25 08 06
	[  +1.292059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 28 6f da 79 a6 08 06
	[  +0.021715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e c3 0d 92 91 5a 08 06
	[Dec 5 20:11] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 58 3b a6 8d 40 08 06
	[ +30.901947] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3c 09 52 3d e1 08 06
	[  +1.444771] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 03 05 4c 3e 73 08 06
	[  +0.058589] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 48 98 e5 23 33 08 06
	[  +6.156143] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 10 f3 a9 91 d9 08 06
	[Dec 5 20:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 18 0d f3 3a 83 08 06
	[  +1.482986] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce c3 68 13 fd 23 08 06
	[  +0.033369] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 8a 70 ff f0 d7 08 06
	[  +6.306172] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca ef 8b ac b6 8f 08 06
	
	
	==> etcd [1805fde25b5848c6fbee0e59b0e8826032c98301c07d37103093fb4e001b083a] <==
	{"level":"info","ts":"2024-12-05T20:38:17.540439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:38:17.540481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-12-05T20:38:17.540497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.540503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.540521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.540529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.542141Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-035676 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:38:17.542146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:17.542174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:17.542447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:17.542512Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:17.543243Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:17.543250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:17.543991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-05T20:38:17.544023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:38:37.489157Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-05T20:38:37.489239Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-035676","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-12-05T20:38:37.489342Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:38:37.489473Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:38:37.503235Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:38:37.503283Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-05T20:38:37.503335Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-12-05T20:38:37.505996Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:37.506106Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:37.506117Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-035676","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ddc20d49e0ed95a564b19bff620a2d7ec935819fb8b6db400816e956b8232b3c] <==
	{"level":"info","ts":"2024-12-05T20:38:47.711405Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:38:47.711442Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:38:47.713373Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:38:47.713644Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:38:47.713672Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:38:47.713785Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:47.713799Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:49.238960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:49.239041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:49.239083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:49.239102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.239110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.239124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.239143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.240209Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-035676 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:38:49.240229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:49.240213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:49.240415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:49.240442Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:49.241473Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:49.241470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:49.242599Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:38:49.242608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-12-05T20:41:13.391941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.143558ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033710888869502 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-035676\" mod_revision:928 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-035676\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-035676\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T20:41:13.392110Z","caller":"traceutil/trace.go:171","msg":"trace[563111025] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"152.274161ms","start":"2024-12-05T20:41:13.239812Z","end":"2024-12-05T20:41:13.392086Z","steps":["trace[563111025] 'process raft request'  (duration: 50.499892ms)","trace[563111025] 'compare'  (duration: 101.030052ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:42:23 up  3:24,  0 users,  load average: 0.41, 0.60, 1.43
	Linux functional-035676 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [aaa98d82a6c942d88b858a72ca3d2f2e9d780281bb646dd8496722726f625a09] <==
	I1205 20:40:21.834159       1 main.go:301] handling current node
	I1205 20:40:31.840986       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:40:31.841020       1 main.go:301] handling current node
	I1205 20:40:41.838588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:40:41.838632       1 main.go:301] handling current node
	I1205 20:40:51.834492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:40:51.834535       1 main.go:301] handling current node
	I1205 20:41:01.836981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:41:01.837051       1 main.go:301] handling current node
	I1205 20:41:11.841019       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:41:11.841056       1 main.go:301] handling current node
	I1205 20:41:21.841021       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:41:21.841056       1 main.go:301] handling current node
	I1205 20:41:31.837983       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:41:31.838020       1 main.go:301] handling current node
	I1205 20:41:41.834239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:41:41.834321       1 main.go:301] handling current node
	I1205 20:41:51.833927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:41:51.833962       1 main.go:301] handling current node
	I1205 20:42:01.836992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:42:01.837031       1 main.go:301] handling current node
	I1205 20:42:11.834050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:42:11.834089       1 main.go:301] handling current node
	I1205 20:42:21.836382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:42:21.836429       1 main.go:301] handling current node
	
	
	==> kindnet [b3cef90390ff49fd8351aafed3b65749941d6a34fda6b274180e73759d61797f] <==
	I1205 20:38:15.613171       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 20:38:15.613577       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1205 20:38:15.613817       1 main.go:148] setting mtu 1500 for CNI 
	I1205 20:38:15.613869       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 20:38:15.613925       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1205 20:38:16.109702       1 controller.go:361] Starting controller kube-network-policies
	I1205 20:38:16.109807       1 controller.go:365] Waiting for informer caches to sync
	I1205 20:38:16.109837       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1205 20:38:18.810175       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1205 20:38:18.810300       1 metrics.go:61] Registering metrics
	I1205 20:38:18.810394       1 controller.go:401] Syncing nftables rules
	I1205 20:38:26.109690       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:38:26.109783       1 main.go:301] handling current node
	I1205 20:38:36.112993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:38:36.113058       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e111d068f8e99753925d8970918b85aa227d55203ab17d399fed5bb5b7d185fc] <==
	I1205 20:38:50.321692       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 20:38:50.321745       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1205 20:38:50.326860       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 20:38:50.327893       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 20:38:50.336951       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 20:38:50.342373       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 20:38:50.342400       1 policy_source.go:224] refreshing policies
	I1205 20:38:50.414108       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:38:51.225711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:38:52.278805       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:38:52.417190       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:38:52.428222       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:38:52.484081       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:38:52.491298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:38:53.719510       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:38:53.994549       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:39:09.009400       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.147.34"}
	I1205 20:39:13.100482       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1205 20:39:13.211130       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.249.187"}
	I1205 20:39:16.570384       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.64.231"}
	I1205 20:39:27.667157       1 controller.go:615] quota admission added evaluator for: namespaces
	I1205 20:39:27.857631       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.39.169"}
	I1205 20:39:27.925557       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.141.87"}
	I1205 20:39:27.933149       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.15.181"}
	I1205 20:39:35.789289       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.189.41"}
	
	
	==> kube-controller-manager [4faf41cf07613ae8c1ed3b30c8e0d348887154f789a063f4c40fd1872ff635ad] <==
	I1205 20:38:22.142883       1 shared_informer.go:320] Caches are synced for disruption
	I1205 20:38:22.143454       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="176.20695ms"
	I1205 20:38:22.143794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="183.444µs"
	I1205 20:38:22.147347       1 shared_informer.go:320] Caches are synced for TTL
	I1205 20:38:22.147389       1 shared_informer.go:320] Caches are synced for persistent volume
	I1205 20:38:22.148142       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1205 20:38:22.163050       1 shared_informer.go:320] Caches are synced for node
	I1205 20:38:22.163149       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1205 20:38:22.163205       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 20:38:22.163217       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1205 20:38:22.163231       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1205 20:38:22.163314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:38:22.169626       1 shared_informer.go:320] Caches are synced for stateful set
	I1205 20:38:22.176295       1 shared_informer.go:320] Caches are synced for taint
	I1205 20:38:22.176439       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 20:38:22.176532       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-035676"
	I1205 20:38:22.176586       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 20:38:22.184403       1 shared_informer.go:320] Caches are synced for daemon sets
	I1205 20:38:22.188662       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:38:22.189515       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:38:22.601618       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 20:38:22.683332       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 20:38:22.683369       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 20:38:23.135479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.092736ms"
	I1205 20:38:23.135588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.03µs"
	
	
	==> kube-controller-manager [ae64eefe1bd91fae9c94c2af422d89ba8f58b168cde0a5f5d7fe9d5272faf59c] <==
	E1205 20:39:27.730927       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1205 20:39:27.750718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.976461ms"
	I1205 20:39:27.817432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="66.660778ms"
	I1205 20:39:27.827300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.070047ms"
	I1205 20:39:27.841435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.946263ms"
	I1205 20:39:27.841524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="51.807µs"
	I1205 20:39:27.841580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="13.365686ms"
	I1205 20:39:27.841626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="31.739µs"
	I1205 20:39:27.852162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="11.761857ms"
	I1205 20:39:27.919107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="66.894224ms"
	I1205 20:39:27.919225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="65.371µs"
	I1205 20:39:29.065507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="5.410446ms"
	I1205 20:39:29.065679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="38.662µs"
	I1205 20:39:35.835722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="10.452981ms"
	I1205 20:39:35.840719       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="4.949442ms"
	I1205 20:39:35.840824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="56.44µs"
	I1205 20:39:35.845024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="53.721µs"
	I1205 20:39:51.378362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:40:52.257498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.961871ms"
	I1205 20:40:52.257615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.954µs"
	I1205 20:40:57.267258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.574564ms"
	I1205 20:40:57.267660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="75.736µs"
	I1205 20:41:22.944361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:41:27.329981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="78.958µs"
	I1205 20:41:40.833490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="73.342µs"
	
	
	==> kube-proxy [2f5e074d44fce18e0adac8f102c1b4823db122b8da81ac8d228eebc95826cda6] <==
	I1205 20:38:15.710214       1 server_linux.go:66] "Using iptables proxy"
	I1205 20:38:18.724104       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 20:38:18.724294       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:38:18.937991       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:38:18.938133       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:38:18.940487       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:38:18.940946       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:38:18.941055       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:18.942400       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:38:18.942418       1 config.go:328] "Starting node config controller"
	I1205 20:38:18.942443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:38:18.942443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:38:18.942484       1 config.go:199] "Starting service config controller"
	I1205 20:38:18.942494       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:38:19.042740       1 shared_informer.go:320] Caches are synced for node config
	I1205 20:38:19.042798       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:38:19.042807       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [4274488ffba4763df8c7ae9bdbb13c4706d4c0523d77439fceca9fc45970edc5] <==
	I1205 20:38:51.344216       1 server_linux.go:66] "Using iptables proxy"
	I1205 20:38:51.509855       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 20:38:51.509959       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:38:51.533754       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:38:51.533832       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:38:51.535721       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:38:51.536128       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:38:51.536164       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:51.537619       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:38:51.537661       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:38:51.537711       1 config.go:199] "Starting service config controller"
	I1205 20:38:51.537765       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:38:51.537826       1 config.go:328] "Starting node config controller"
	I1205 20:38:51.537858       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:38:51.637885       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:38:51.637915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:38:51.637975       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9ee9570ac841a3195747ff59f562c0831ba49a61e0f3d39c3bf13e32124f325b] <==
	I1205 20:38:16.578944       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:38:18.729009       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:38:18.729126       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:18.814838       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:38:18.814903       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:38:18.815043       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I1205 20:38:18.815068       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1205 20:38:18.815125       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:38:18.815518       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:38:18.816156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 20:38:18.816195       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1205 20:38:18.915270       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:38:18.915561       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1205 20:38:18.916760       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1205 20:38:37.488154       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 20:38:37.488349       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1205 20:38:37.488400       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 20:38:37.488433       1 requestheader_controller.go:186] Shutting down RequestHeaderAuthRequestController
	I1205 20:38:37.488619       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1205 20:38:37.489011       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b84b7138417d35cdc86f4f7460be805339b7a0e801d03953825a14ec38603bf9] <==
	I1205 20:38:48.249968       1 serving.go:386] Generated self-signed cert in-memory
	W1205 20:38:50.235516       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:38:50.235716       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:38:50.235796       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:38:50.235839       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:38:50.323052       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:38:50.323080       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:50.325515       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:38:50.325582       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:38:50.325756       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:38:50.325796       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:38:50.426514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:41:06 functional-035676 kubelet[5316]: E1205 20:41:06.951085    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431266950849448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:16 functional-035676 kubelet[5316]: E1205 20:41:16.952442    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431276952216417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:16 functional-035676 kubelet[5316]: E1205 20:41:16.952486    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431276952216417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:26 functional-035676 kubelet[5316]: E1205 20:41:26.953810    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431286953632582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:26 functional-035676 kubelet[5316]: E1205 20:41:26.953851    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431286953632582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:27 functional-035676 kubelet[5316]: E1205 20:41:27.176301    5316 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 05 20:41:27 functional-035676 kubelet[5316]: E1205 20:41:27.176378    5316 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 05 20:41:27 functional-035676 kubelet[5316]: E1205 20:41:27.176691    5316 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qx4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-5j57z_default(dbfaf061-317a-4172-a255-e7dea91d9f24): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 05 20:41:27 functional-035676 kubelet[5316]: E1205 20:41:27.178064    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-5j57z" podUID="dbfaf061-317a-4172-a255-e7dea91d9f24"
	Dec 05 20:41:27 functional-035676 kubelet[5316]: E1205 20:41:27.318962    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-5j57z" podUID="dbfaf061-317a-4172-a255-e7dea91d9f24"
	Dec 05 20:41:36 functional-035676 kubelet[5316]: E1205 20:41:36.955193    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431296955018849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:36 functional-035676 kubelet[5316]: E1205 20:41:36.955237    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431296955018849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:46 functional-035676 kubelet[5316]: E1205 20:41:46.956508    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431306956334275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:46 functional-035676 kubelet[5316]: E1205 20:41:46.956549    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431306956334275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:211194,},InodesUsed:&UInt64Value{Value:106,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:56 functional-035676 kubelet[5316]: E1205 20:41:56.958159    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431316957961829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:56 functional-035676 kubelet[5316]: E1205 20:41:56.958242    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431316957961829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:57 functional-035676 kubelet[5316]: E1205 20:41:57.792851    5316 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 05 20:41:57 functional-035676 kubelet[5316]: E1205 20:41:57.792939    5316 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 05 20:41:57 functional-035676 kubelet[5316]: E1205 20:41:57.793217    5316 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sh5vf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(58fc0c14-53ee-4fe5-8002-dc8e42dcec15): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 05 20:41:57 functional-035676 kubelet[5316]: E1205 20:41:57.794453    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="58fc0c14-53ee-4fe5-8002-dc8e42dcec15"
	Dec 05 20:42:06 functional-035676 kubelet[5316]: E1205 20:42:06.959799    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431326959579593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:42:06 functional-035676 kubelet[5316]: E1205 20:42:06.959851    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431326959579593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:42:07 functional-035676 kubelet[5316]: E1205 20:42:07.825127    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="58fc0c14-53ee-4fe5-8002-dc8e42dcec15"
	Dec 05 20:42:16 functional-035676 kubelet[5316]: E1205 20:42:16.961366    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431336961165086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:42:16 functional-035676 kubelet[5316]: E1205 20:42:16.961429    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431336961165086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [c8f01a50e6ca491bda86f844a9d299010f832db6d7dc81378a4107c75673af23] <==
	2024/12/05 20:40:56 Using namespace: kubernetes-dashboard
	2024/12/05 20:40:56 Using in-cluster config to connect to apiserver
	2024/12/05 20:40:56 Using secret token for csrf signing
	2024/12/05 20:40:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/05 20:40:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/05 20:40:56 Successful initial request to the apiserver, version: v1.31.2
	2024/12/05 20:40:56 Generating JWE encryption key
	2024/12/05 20:40:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/05 20:40:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/05 20:40:56 Initializing JWE encryption key from synchronized object
	2024/12/05 20:40:56 Creating in-cluster Sidecar client
	2024/12/05 20:40:56 Successful request to sidecar
	2024/12/05 20:40:56 Serving insecurely on HTTP port: 9090
	2024/12/05 20:40:56 Starting overwatch
	
	
	==> storage-provisioner [41ec2db9f4ccab9d12b138b4fc8ae29735f8d85bf0eb83cb7ad5d88120a23228] <==
	I1205 20:38:51.231989       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:38:51.242616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:38:51.242657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:39:08.706139       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:39:08.706272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"822d98ee-13e4-4b2f-b177-255ce8da55ab", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-035676_f6917150-90d1-4609-9747-f5c2fa9f487a became leader
	I1205 20:39:08.706344       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-035676_f6917150-90d1-4609-9747-f5c2fa9f487a!
	I1205 20:39:08.807040       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-035676_f6917150-90d1-4609-9747-f5c2fa9f487a!
	I1205 20:39:21.335980       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1205 20:39:21.337051       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e31d1d09-1c8c-456b-ac37-f8052cd19fee 348 0 2024-12-05 20:37:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-05 20:37:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1 695 0 2024-12-05 20:39:21 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-05 20:39:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-05 20:39:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1205 20:39:21.338654       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1205 20:39:21.339217       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1" provisioned
	I1205 20:39:21.339446       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1205 20:39:21.339541       1 volume_store.go:212] Trying to save persistentvolume "pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1"
	I1205 20:39:21.346823       1 volume_store.go:219] persistentvolume "pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1" saved
	I1205 20:39:21.347107       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1
	
	
	==> storage-provisioner [8a3c4140cf5f18325ff483bdf3e02297a23e981e6575f9be8a35102819cc6cdf] <==
	I1205 20:38:29.895430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:38:29.910813       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:38:29.910858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-035676 -n functional-035676
helpers_test.go:261: (dbg) Run:  kubectl --context functional-035676 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-5j57z nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-035676 describe pod busybox-mount mysql-6cdb49bbb-5j57z nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-035676 describe pod busybox-mount mysql-6cdb49bbb-5j57z nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ba7dd2f43e881c293ef0184d1ba79aa5635b0dc9945ab0d4b51b8b6bbb95158f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 05 Dec 2024 20:39:17 +0000
	      Finished:     Thu, 05 Dec 2024 20:39:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnz26 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-nnz26:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m7s  default-scheduler  Successfully assigned default/busybox-mount to functional-035676
	  Normal  Pulling    3m8s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m7s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.218s (1.221s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m7s  kubelet            Created container mount-munger
	  Normal  Started    3m6s  kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-5j57z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qx4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7qx4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m48s                default-scheduler  Successfully assigned default/mysql-6cdb49bbb-5j57z to functional-035676
	  Warning  Failed     57s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     57s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    57s                  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     57s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    44s (x2 over 2m48s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:16 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sh5vf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sh5vf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m7s                 default-scheduler  Successfully assigned default/nginx-svc to functional-035676
	  Warning  Failed     2m35s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     27s (x2 over 2m35s)  kubelet            Error: ErrImagePull
	  Warning  Failed     27s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    17s (x2 over 2m35s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17s (x2 over 2m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    6s (x3 over 3m8s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:21 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-66k9w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-66k9w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-035676
	  Warning  Failed     94s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     94s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    94s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     94s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    82s (x2 over 3m3s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E1205 20:42:37.672324  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.15s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-035676 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-5j57z" [dbfaf061-317a-4172-a255-e7dea91d9f24] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1205 20:39:53.830702  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
2024/12/05 20:41:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-035676 -n functional-035676
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-12-05 20:49:36.128739282 +0000 UTC m=+1509.936574076
functional_test.go:1799: (dbg) Run:  kubectl --context functional-035676 describe po mysql-6cdb49bbb-5j57z -n default
functional_test.go:1799: (dbg) kubectl --context functional-035676 describe po mysql-6cdb49bbb-5j57z -n default:
Name:             mysql-6cdb49bbb-5j57z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-035676/192.168.49.2
Start Time:       Thu, 05 Dec 2024 20:39:35 +0000
Labels:           app=mysql
pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qx4t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7qx4t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-5j57z to functional-035676
Normal   Pulling    3m38s (x4 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     2m18s (x4 over 8m9s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m18s (x4 over 8m9s)  kubelet            Error: ErrImagePull
Normal   BackOff    98s (x7 over 8m9s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     98s (x7 over 8m9s)    kubelet            Error: ImagePullBackOff
functional_test.go:1799: (dbg) Run:  kubectl --context functional-035676 logs mysql-6cdb49bbb-5j57z -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-035676 logs mysql-6cdb49bbb-5j57z -n default: exit status 1 (70.006502ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-5j57z" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-035676 logs mysql-6cdb49bbb-5j57z -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-035676
helpers_test.go:235: (dbg) docker inspect functional-035676:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7",
	        "Created": "2024-12-05T20:37:27.747322592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 857131,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-05T20:37:27.865185441Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/hosts",
	        "LogPath": "/var/lib/docker/containers/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7/e2affe68c424dcd2eed529d24f6868c08df93d16ca9f717c71a98b0545ef3ab7-json.log",
	        "Name": "/functional-035676",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-035676:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-035676",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100-init/diff:/var/lib/docker/overlay2/0f5bc7fa09e0d0f29301db80becc3339e358e049d584dfb307a79bde49527770/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2da7b73e473ace45dc027636e6b2040736c41b8d0f04592aea75fcfc908de100/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-035676",
	                "Source": "/var/lib/docker/volumes/functional-035676/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-035676",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-035676",
	                "name.minikube.sigs.k8s.io": "functional-035676",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76d3a2afa4994b5f0c602452ecfa7d9b636e228d4700a2725d3a9a82d57dd536",
	            "SandboxKey": "/var/run/docker/netns/76d3a2afa499",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-035676": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "731bc11d1bd3da4dc51139780fcf291dcd693b3a8e7700749619b288cdd87458",
	                    "EndpointID": "393b85eaf9f7f3f8f75b5df6a7afb2d2dd1075df885c9aa85f3e07acad4823bf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-035676",
	                        "e2affe68c424"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-035676 -n functional-035676
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 logs -n 25: (1.44897415s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| image          | functional-035676 image load --daemon                                      | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | kicbase/echo-server:functional-035676                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/ssl/certs/8303812.pem                                                 |                   |         |         |                     |                     |
	| image          | functional-035676 image ls                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /usr/share/ca-certificates/8303812.pem                                     |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                         | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:41 UTC |
	|                | -p functional-035676                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                     |                   |         |         |                     |                     |
	| image          | functional-035676 image save kicbase/echo-server:functional-035676         | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676 image rm                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | kicbase/echo-server:functional-035676                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676 image ls                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	| image          | functional-035676 image load                                               | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| service        | functional-035676 service                                                  | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | hello-node-connect --url                                                   |                   |         |         |                     |                     |
	| addons         | functional-035676 addons list                                              | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	| addons         | functional-035676 addons list                                              | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | -o json                                                                    |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh sudo cat                                             | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:39 UTC | 05 Dec 24 20:39 UTC |
	|                | /etc/test/nested/copy/830381/hosts                                         |                   |         |         |                     |                     |
	| update-context | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-035676                                                          | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-035676 ssh pgrep                                                | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-035676 image build -t                                           | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|                | localhost/my-image:functional-035676                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-035676 image ls                                                 | functional-035676 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC | 05 Dec 24 20:41 UTC |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:39:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:39:24.385769  870931 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:39:24.385888  870931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.385894  870931 out.go:358] Setting ErrFile to fd 2...
	I1205 20:39:24.385898  870931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.386220  870931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:39:24.386774  870931 out.go:352] Setting JSON to false
	I1205 20:39:24.387885  870931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12113,"bootTime":1733419051,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:39:24.388009  870931 start.go:139] virtualization: kvm guest
	I1205 20:39:24.390178  870931 out.go:177] * [functional-035676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:39:24.391705  870931 notify.go:220] Checking for updates...
	I1205 20:39:24.391712  870931 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:39:24.393211  870931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:39:24.394693  870931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:39:24.395973  870931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:39:24.397199  870931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:39:24.398443  870931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:39:24.400113  870931 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:39:24.400531  870931 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:39:24.422225  870931 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:39:24.422323  870931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:39:24.477630  870931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-12-05 20:39:24.467300148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:39:24.477769  870931 docker.go:318] overlay module found
	I1205 20:39:24.480254  870931 out.go:177] * Using the docker driver based on existing profile
	I1205 20:39:24.481752  870931 start.go:297] selected driver: docker
	I1205 20:39:24.481768  870931 start.go:901] validating driver "docker" against &{Name:functional-035676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-035676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:39:24.481862  870931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:39:24.484129  870931 out.go:201] 
	W1205 20:39:24.485549  870931 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 20:39:24.486824  870931 out.go:201] 
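Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube's pre-flight validation rejecting a 250MiB memory request against its 1800MB usable minimum, before any driver work starts. A minimal reproduction against the same profile might look like the following; the binary path and profile name come from the log, while the explicit --memory value is an assumption about how the test triggered the check:

  out/minikube-linux-amd64 start -p functional-035676 --memory=250mb
  # expected: exit with RSRC_INSUFFICIENT_REQ_MEMORY and no changes to the existing cluster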
	
	
	==> CRI-O <==
	Dec 05 20:48:25 functional-035676 crio[4905]: time="2024-12-05 20:48:25.825012502Z" level=info msg="Image docker.io/mysql:5.7 not found" id=1edb5a5b-49ce-4c6e-8043-6d7f06761a4b name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:39 functional-035676 crio[4905]: time="2024-12-05 20:48:39.824763712Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fb277040-16f8-47bc-9ef6-46f49f777f01 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:39 functional-035676 crio[4905]: time="2024-12-05 20:48:39.824766934Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b2d00f1e-984e-48fa-8cd4-075155009c6c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:39 functional-035676 crio[4905]: time="2024-12-05 20:48:39.825103813Z" level=info msg="Image docker.io/nginx:alpine not found" id=fb277040-16f8-47bc-9ef6-46f49f777f01 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:39 functional-035676 crio[4905]: time="2024-12-05 20:48:39.825128221Z" level=info msg="Image docker.io/mysql:5.7 not found" id=b2d00f1e-984e-48fa-8cd4-075155009c6c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:40 functional-035676 crio[4905]: time="2024-12-05 20:48:40.585591229Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=0f4a66b7-68a8-4c35-a085-72f2fdf42d92 name=/runtime.v1.ImageService/PullImage
	Dec 05 20:48:40 functional-035676 crio[4905]: time="2024-12-05 20:48:40.616163007Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 05 20:48:50 functional-035676 crio[4905]: time="2024-12-05 20:48:50.824268889Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=2dbd03da-aeca-4ad3-ab0c-59188e70f7f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:50 functional-035676 crio[4905]: time="2024-12-05 20:48:50.824536789Z" level=info msg="Image docker.io/nginx:alpine not found" id=2dbd03da-aeca-4ad3-ab0c-59188e70f7f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:54 functional-035676 crio[4905]: time="2024-12-05 20:48:54.824031215Z" level=info msg="Checking image status: docker.io/nginx:latest" id=722d27ea-037a-439c-baf9-d94b5473e829 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:48:54 functional-035676 crio[4905]: time="2024-12-05 20:48:54.824332386Z" level=info msg="Image docker.io/nginx:latest not found" id=722d27ea-037a-439c-baf9-d94b5473e829 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:05 functional-035676 crio[4905]: time="2024-12-05 20:49:05.823992501Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e436f783-d89a-4a2e-baa4-1c46ed6deeba name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:05 functional-035676 crio[4905]: time="2024-12-05 20:49:05.824236537Z" level=info msg="Image docker.io/nginx:alpine not found" id=e436f783-d89a-4a2e-baa4-1c46ed6deeba name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:07 functional-035676 crio[4905]: time="2024-12-05 20:49:07.824505847Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c0b0e25c-7ba7-429e-b34f-591caba9303c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:07 functional-035676 crio[4905]: time="2024-12-05 20:49:07.824827278Z" level=info msg="Image docker.io/nginx:latest not found" id=c0b0e25c-7ba7-429e-b34f-591caba9303c name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:17 functional-035676 crio[4905]: time="2024-12-05 20:49:17.825113609Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=34558297-b1ef-49e9-992b-8bef217b3019 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:17 functional-035676 crio[4905]: time="2024-12-05 20:49:17.825380233Z" level=info msg="Image docker.io/nginx:alpine not found" id=34558297-b1ef-49e9-992b-8bef217b3019 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:21 functional-035676 crio[4905]: time="2024-12-05 20:49:21.824232683Z" level=info msg="Checking image status: docker.io/nginx:latest" id=f61fb63e-b901-4beb-9dc2-83a970facbf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:21 functional-035676 crio[4905]: time="2024-12-05 20:49:21.824473908Z" level=info msg="Image docker.io/nginx:latest not found" id=f61fb63e-b901-4beb-9dc2-83a970facbf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:26 functional-035676 crio[4905]: time="2024-12-05 20:49:26.824777231Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=04077ee0-045b-4ae9-8da8-bb5b2f0d6973 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:26 functional-035676 crio[4905]: time="2024-12-05 20:49:26.825123142Z" level=info msg="Image docker.io/mysql:5.7 not found" id=04077ee0-045b-4ae9-8da8-bb5b2f0d6973 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:32 functional-035676 crio[4905]: time="2024-12-05 20:49:32.824716610Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ca58a7d0-0712-4ba7-a7eb-25f7179b2130 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:32 functional-035676 crio[4905]: time="2024-12-05 20:49:32.825040463Z" level=info msg="Image docker.io/nginx:alpine not found" id=ca58a7d0-0712-4ba7-a7eb-25f7179b2130 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:35 functional-035676 crio[4905]: time="2024-12-05 20:49:35.824864222Z" level=info msg="Checking image status: docker.io/nginx:latest" id=d79f6856-64df-480d-8d0f-a7a7be3bbd40 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:49:35 functional-035676 crio[4905]: time="2024-12-05 20:49:35.825129938Z" level=info msg="Image docker.io/nginx:latest not found" id=d79f6856-64df-480d-8d0f-a7a7be3bbd40 name=/runtime.v1.ImageService/ImageStatus
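Note: the CRI-O entries above are kubelet ImageStatus probes for docker.io/nginx:alpine, docker.io/nginx:latest and docker.io/mysql:5.7, none of which have finished pulling, so the pods that reference them stay in ImagePullBackOff. As a rough follow-up (standard crictl usage, not part of this test run), the runtime's view can be checked from a shell inside the node:

  out/minikube-linux-amd64 -p functional-035676 ssh
  sudo crictl images | grep -E 'nginx|mysql'   # images the runtime actually has stored
  sudo crictl pull docker.io/nginx:alpine      # retry the pull by hand to surface the registry error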
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c8f01a50e6ca4       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         8 minutes ago       Running             kubernetes-dashboard        0                   c0137ed4edec2       kubernetes-dashboard-695b96c756-s6qrs
	24b513dc663dc       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   8 minutes ago       Running             dashboard-metrics-scraper   0                   955f2725bf93b       dashboard-metrics-scraper-c5db448b4-gwx6g
	163d6ff8e1abf       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   ff4e3a43baa7a       hello-node-connect-67bdd5bbb4-bmhxn
	ba7dd2f43e881       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   7242bc7e82d74       busybox-mount
	bc3d7a7b23d7a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   8d183dbac2202       hello-node-6b9f76b5c7-vmcv5
	b116deecc263b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     2                   bad95daac69bf       coredns-7c65d6cfc9-6gt9m
	aaa98d82a6c94       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 10 minutes ago      Running             kindnet-cni                 2                   b54850960faff       kindnet-45nv9
	4274488ffba47       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 10 minutes ago      Running             kube-proxy                  2                   12e57dbe4508f       kube-proxy-vxw66
	41ec2db9f4cca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   aebe265caf687       storage-provisioner
	e111d068f8e99       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                 10 minutes ago      Running             kube-apiserver              0                   a1d3b4273b9ac       kube-apiserver-functional-035676
	ae64eefe1bd91       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 10 minutes ago      Running             kube-controller-manager     2                   a235fbc051f9a       kube-controller-manager-functional-035676
	b84b7138417d3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 10 minutes ago      Running             kube-scheduler              2                   52a7001c5af1c       kube-scheduler-functional-035676
	ddc20d49e0ed9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Running             etcd                        2                   6edd92de1ef50       etcd-functional-035676
	8a3c4140cf5f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   aebe265caf687       storage-provisioner
	1805fde25b584       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 11 minutes ago      Exited              etcd                        1                   6edd92de1ef50       etcd-functional-035676
	9ee9570ac841a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 11 minutes ago      Exited              kube-scheduler              1                   52a7001c5af1c       kube-scheduler-functional-035676
	b3cef90390ff4       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 11 minutes ago      Exited              kindnet-cni                 1                   b54850960faff       kindnet-45nv9
	4faf41cf07613       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 11 minutes ago      Exited              kube-controller-manager     1                   a235fbc051f9a       kube-controller-manager-functional-035676
	2f5e074d44fce       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 11 minutes ago      Exited              kube-proxy                  1                   12e57dbe4508f       kube-proxy-vxw66
	6ae7d01ebe58d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     1                   bad95daac69bf       coredns-7c65d6cfc9-6gt9m
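Note: the table above is the runtime's container listing for the node; the Exited entries are the pre-restart copies of etcd, the scheduler, the controller manager, kube-proxy, coredns and kindnet, each re-created as a Running container after the functional restart. A similar listing can usually be produced directly, assuming the standard minikube ssh command passthrough:

  out/minikube-linux-amd64 -p functional-035676 ssh -- sudo crictl ps -a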
	
	
	==> coredns [6ae7d01ebe58d2a0f7432d4c6ed1f50c27acd2e2676c403ae718f0b357ee67e0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37411 - 7093 "HINFO IN 8719584038132038936.8455700699231624629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.097339807s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b116deecc263baf765b518b44722ae0fb50cca6f7669412b0e34b72bc09b66fe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33084 - 56432 "HINFO IN 3788741047440824265.7123424019254399322. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028469141s
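Note: taken together, the two coredns logs show the first instance failing to reach the API server at 10.96.0.1:443 while the control plane was restarting and then receiving SIGTERM, while its replacement came up cleanly and answered its HINFO self-check. A quick health check of the deployment (standard kubectl, using the k8s-app=kube-dns label kubeadm applies to CoreDNS) would be something like:

  kubectl --context functional-035676 -n kube-system get pods -l k8s-app=kube-dns
  kubectl --context functional-035676 -n kube-system logs -l k8s-app=kube-dns --tail=5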
	
	
	==> describe nodes <==
	Name:               functional-035676
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-035676
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=functional-035676
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:37:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-035676
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:49:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:47:29 +0000   Thu, 05 Dec 2024 20:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:47:29 +0000   Thu, 05 Dec 2024 20:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:47:29 +0000   Thu, 05 Dec 2024 20:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:47:29 +0000   Thu, 05 Dec 2024 20:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-035676
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 961f10607c9946829dd28b4e86637691
	  System UUID:                d72bcda0-23c8-41f2-89a2-742c96d43306
	  Boot ID:                    39024a98-8447-46b2-bbc5-7915429b9c2d
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-vmcv5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-67bdd5bbb4-bmhxn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-5j57z                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-6gt9m                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-035676                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-45nv9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-035676             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-035676    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vxw66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-035676             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-gwx6g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-s6qrs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node functional-035676 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node functional-035676 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node functional-035676 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node functional-035676 event: Registered Node functional-035676 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-035676 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-035676 event: Registered Node functional-035676 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-035676 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-035676 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-035676 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-035676 event: Registered Node functional-035676 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 9e 58 22 0d b9 08 06
	[ +28.753910] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 78 7a 98 fe 25 08 06
	[  +1.292059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 28 6f da 79 a6 08 06
	[  +0.021715] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e c3 0d 92 91 5a 08 06
	[Dec 5 20:11] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 58 3b a6 8d 40 08 06
	[ +30.901947] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 3c 09 52 3d e1 08 06
	[  +1.444771] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 03 05 4c 3e 73 08 06
	[  +0.058589] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 48 98 e5 23 33 08 06
	[  +6.156143] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 10 f3 a9 91 d9 08 06
	[Dec 5 20:12] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 18 0d f3 3a 83 08 06
	[  +1.482986] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce c3 68 13 fd 23 08 06
	[  +0.033369] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 8a 70 ff f0 d7 08 06
	[  +6.306172] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca ef 8b ac b6 8f 08 06
	
	
	==> etcd [1805fde25b5848c6fbee0e59b0e8826032c98301c07d37103093fb4e001b083a] <==
	{"level":"info","ts":"2024-12-05T20:38:17.540439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:38:17.540481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-12-05T20:38:17.540497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.540503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.540521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.540529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:17.542141Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-035676 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:38:17.542146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:17.542174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:17.542447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:17.542512Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:17.543243Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:17.543250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:17.543991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-05T20:38:17.544023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:38:37.489157Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-05T20:38:37.489239Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-035676","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-12-05T20:38:37.489342Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:38:37.489473Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:38:37.503235Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:38:37.503283Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-05T20:38:37.503335Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-12-05T20:38:37.505996Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:37.506106Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:37.506117Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-035676","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ddc20d49e0ed95a564b19bff620a2d7ec935819fb8b6db400816e956b8232b3c] <==
	{"level":"info","ts":"2024-12-05T20:38:47.713644Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:38:47.713672Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:38:47.713785Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:47.713799Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-05T20:38:49.238960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:49.239041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:49.239083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-05T20:38:49.239102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.239110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.239124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.239143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-05T20:38:49.240209Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-035676 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:38:49.240229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:49.240213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:38:49.240415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:49.240442Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:38:49.241473Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:49.241470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:38:49.242599Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:38:49.242608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-12-05T20:41:13.391941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.143558ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033710888869502 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-035676\" mod_revision:928 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-035676\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-035676\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T20:41:13.392110Z","caller":"traceutil/trace.go:171","msg":"trace[563111025] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"152.274161ms","start":"2024-12-05T20:41:13.239812Z","end":"2024-12-05T20:41:13.392086Z","steps":["trace[563111025] 'process raft request'  (duration: 50.499892ms)","trace[563111025] 'compare'  (duration: 101.030052ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:48:49.259792Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2024-12-05T20:48:49.280439Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1094,"took":"20.277479ms","hash":3498646136,"current-db-size-bytes":4186112,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-05T20:48:49.280494Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3498646136,"revision":1094,"compact-revision":-1}
	
	
	==> kernel <==
	 20:49:37 up  3:32,  0 users,  load average: 0.13, 0.24, 0.94
	Linux functional-035676 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [aaa98d82a6c942d88b858a72ca3d2f2e9d780281bb646dd8496722726f625a09] <==
	I1205 20:47:31.834155       1 main.go:301] handling current node
	I1205 20:47:41.834131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:47:41.834166       1 main.go:301] handling current node
	I1205 20:47:51.834494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:47:51.834528       1 main.go:301] handling current node
	I1205 20:48:01.834385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:48:01.834421       1 main.go:301] handling current node
	I1205 20:48:11.833973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:48:11.834017       1 main.go:301] handling current node
	I1205 20:48:21.839065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:48:21.839108       1 main.go:301] handling current node
	I1205 20:48:31.834065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:48:31.834099       1 main.go:301] handling current node
	I1205 20:48:41.833774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:48:41.833825       1 main.go:301] handling current node
	I1205 20:48:51.834586       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:48:51.834629       1 main.go:301] handling current node
	I1205 20:49:01.836981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:49:01.837022       1 main.go:301] handling current node
	I1205 20:49:11.837069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:49:11.837115       1 main.go:301] handling current node
	I1205 20:49:21.836977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:49:21.837009       1 main.go:301] handling current node
	I1205 20:49:31.836988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:49:31.837054       1 main.go:301] handling current node
	
	
	==> kindnet [b3cef90390ff49fd8351aafed3b65749941d6a34fda6b274180e73759d61797f] <==
	I1205 20:38:15.613171       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1205 20:38:15.613577       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1205 20:38:15.613817       1 main.go:148] setting mtu 1500 for CNI 
	I1205 20:38:15.613869       1 main.go:178] kindnetd IP family: "ipv4"
	I1205 20:38:15.613925       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1205 20:38:16.109702       1 controller.go:361] Starting controller kube-network-policies
	I1205 20:38:16.109807       1 controller.go:365] Waiting for informer caches to sync
	I1205 20:38:16.109837       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1205 20:38:18.810175       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1205 20:38:18.810300       1 metrics.go:61] Registering metrics
	I1205 20:38:18.810394       1 controller.go:401] Syncing nftables rules
	I1205 20:38:26.109690       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:38:26.109783       1 main.go:301] handling current node
	I1205 20:38:36.112993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1205 20:38:36.113058       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e111d068f8e99753925d8970918b85aa227d55203ab17d399fed5bb5b7d185fc] <==
	I1205 20:38:50.321692       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 20:38:50.321745       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1205 20:38:50.326860       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 20:38:50.327893       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 20:38:50.336951       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 20:38:50.342373       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 20:38:50.342400       1 policy_source.go:224] refreshing policies
	I1205 20:38:50.414108       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:38:51.225711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:38:52.278805       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:38:52.417190       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:38:52.428222       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:38:52.484081       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:38:52.491298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:38:53.719510       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:38:53.994549       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:39:09.009400       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.147.34"}
	I1205 20:39:13.100482       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1205 20:39:13.211130       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.249.187"}
	I1205 20:39:16.570384       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.64.231"}
	I1205 20:39:27.667157       1 controller.go:615] quota admission added evaluator for: namespaces
	I1205 20:39:27.857631       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.39.169"}
	I1205 20:39:27.925557       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.141.87"}
	I1205 20:39:27.933149       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.15.181"}
	I1205 20:39:35.789289       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.189.41"}
	
	
	==> kube-controller-manager [4faf41cf07613ae8c1ed3b30c8e0d348887154f789a063f4c40fd1872ff635ad] <==
	I1205 20:38:22.142883       1 shared_informer.go:320] Caches are synced for disruption
	I1205 20:38:22.143454       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="176.20695ms"
	I1205 20:38:22.143794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="183.444µs"
	I1205 20:38:22.147347       1 shared_informer.go:320] Caches are synced for TTL
	I1205 20:38:22.147389       1 shared_informer.go:320] Caches are synced for persistent volume
	I1205 20:38:22.148142       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1205 20:38:22.163050       1 shared_informer.go:320] Caches are synced for node
	I1205 20:38:22.163149       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1205 20:38:22.163205       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1205 20:38:22.163217       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1205 20:38:22.163231       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1205 20:38:22.163314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:38:22.169626       1 shared_informer.go:320] Caches are synced for stateful set
	I1205 20:38:22.176295       1 shared_informer.go:320] Caches are synced for taint
	I1205 20:38:22.176439       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 20:38:22.176532       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-035676"
	I1205 20:38:22.176586       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 20:38:22.184403       1 shared_informer.go:320] Caches are synced for daemon sets
	I1205 20:38:22.188662       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:38:22.189515       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:38:22.601618       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 20:38:22.683332       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 20:38:22.683369       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 20:38:23.135479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.092736ms"
	I1205 20:38:23.135588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.03µs"
	
	
	==> kube-controller-manager [ae64eefe1bd91fae9c94c2af422d89ba8f58b168cde0a5f5d7fe9d5272faf59c] <==
	I1205 20:39:27.919107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="66.894224ms"
	I1205 20:39:27.919225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="65.371µs"
	I1205 20:39:29.065507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="5.410446ms"
	I1205 20:39:29.065679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="38.662µs"
	I1205 20:39:35.835722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="10.452981ms"
	I1205 20:39:35.840719       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="4.949442ms"
	I1205 20:39:35.840824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="56.44µs"
	I1205 20:39:35.845024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="53.721µs"
	I1205 20:39:51.378362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:40:52.257498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.961871ms"
	I1205 20:40:52.257615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.954µs"
	I1205 20:40:57.267258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.574564ms"
	I1205 20:40:57.267660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="75.736µs"
	I1205 20:41:22.944361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:41:27.329981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="78.958µs"
	I1205 20:41:40.833490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="73.342µs"
	I1205 20:42:23.623580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:43:24.834136       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="121.017µs"
	I1205 20:43:37.834334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="116.005µs"
	I1205 20:45:28.833863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="109.287µs"
	I1205 20:45:43.833402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="77.311µs"
	I1205 20:47:29.246970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-035676"
	I1205 20:47:31.833374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="128.55µs"
	I1205 20:47:43.834051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="136.628µs"
	I1205 20:49:26.834197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="92.776µs"
	
	
	==> kube-proxy [2f5e074d44fce18e0adac8f102c1b4823db122b8da81ac8d228eebc95826cda6] <==
	I1205 20:38:15.710214       1 server_linux.go:66] "Using iptables proxy"
	I1205 20:38:18.724104       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 20:38:18.724294       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:38:18.937991       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:38:18.938133       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:38:18.940487       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:38:18.940946       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:38:18.941055       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:18.942400       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:38:18.942418       1 config.go:328] "Starting node config controller"
	I1205 20:38:18.942443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:38:18.942443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:38:18.942484       1 config.go:199] "Starting service config controller"
	I1205 20:38:18.942494       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:38:19.042740       1 shared_informer.go:320] Caches are synced for node config
	I1205 20:38:19.042798       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:38:19.042807       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [4274488ffba4763df8c7ae9bdbb13c4706d4c0523d77439fceca9fc45970edc5] <==
	I1205 20:38:51.344216       1 server_linux.go:66] "Using iptables proxy"
	I1205 20:38:51.509855       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1205 20:38:51.509959       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:38:51.533754       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:38:51.533832       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:38:51.535721       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:38:51.536128       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:38:51.536164       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:51.537619       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:38:51.537661       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:38:51.537711       1 config.go:199] "Starting service config controller"
	I1205 20:38:51.537765       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:38:51.537826       1 config.go:328] "Starting node config controller"
	I1205 20:38:51.537858       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:38:51.637885       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:38:51.637915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:38:51.637975       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9ee9570ac841a3195747ff59f562c0831ba49a61e0f3d39c3bf13e32124f325b] <==
	I1205 20:38:16.578944       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:38:18.729009       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:38:18.729126       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:18.814838       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:38:18.814903       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:38:18.815043       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I1205 20:38:18.815068       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1205 20:38:18.815125       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:38:18.815518       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:38:18.816156       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 20:38:18.816195       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1205 20:38:18.915270       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:38:18.915561       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1205 20:38:18.916760       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1205 20:38:37.488154       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1205 20:38:37.488349       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1205 20:38:37.488400       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 20:38:37.488433       1 requestheader_controller.go:186] Shutting down RequestHeaderAuthRequestController
	I1205 20:38:37.488619       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1205 20:38:37.489011       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b84b7138417d35cdc86f4f7460be805339b7a0e801d03953825a14ec38603bf9] <==
	I1205 20:38:48.249968       1 serving.go:386] Generated self-signed cert in-memory
	W1205 20:38:50.235516       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:38:50.235716       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:38:50.235796       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:38:50.235839       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:38:50.323052       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:38:50.323080       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:38:50.325515       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:38:50.325582       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:38:50.325756       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:38:50.325796       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:38:50.426514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:48:47 functional-035676 kubelet[5316]: E1205 20:48:47.024659    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431727024462769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:48:47 functional-035676 kubelet[5316]: E1205 20:48:47.024698    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431727024462769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:48:50 functional-035676 kubelet[5316]: E1205 20:48:50.824777    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="58fc0c14-53ee-4fe5-8002-dc8e42dcec15"
	Dec 05 20:48:54 functional-035676 kubelet[5316]: E1205 20:48:54.824571    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="8105cd9e-6de0-45ef-bdae-b7bee83bd8d0"
	Dec 05 20:48:57 functional-035676 kubelet[5316]: E1205 20:48:57.026224    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431737026053670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:48:57 functional-035676 kubelet[5316]: E1205 20:48:57.026270    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431737026053670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:05 functional-035676 kubelet[5316]: E1205 20:49:05.824472    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="58fc0c14-53ee-4fe5-8002-dc8e42dcec15"
	Dec 05 20:49:07 functional-035676 kubelet[5316]: E1205 20:49:07.029471    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431747029250993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:07 functional-035676 kubelet[5316]: E1205 20:49:07.029514    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431747029250993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:07 functional-035676 kubelet[5316]: E1205 20:49:07.825198    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="8105cd9e-6de0-45ef-bdae-b7bee83bd8d0"
	Dec 05 20:49:11 functional-035676 kubelet[5316]: E1205 20:49:11.224242    5316 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 05 20:49:11 functional-035676 kubelet[5316]: E1205 20:49:11.224315    5316 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 05 20:49:11 functional-035676 kubelet[5316]: E1205 20:49:11.224460    5316 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qx4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext
:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-5j57z_default(dbfaf061-317a-4172-a255-e7dea91d9f24): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 05 20:49:11 functional-035676 kubelet[5316]: E1205 20:49:11.225635    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-5j57z" podUID="dbfaf061-317a-4172-a255-e7dea91d9f24"
	Dec 05 20:49:17 functional-035676 kubelet[5316]: E1205 20:49:17.031053    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431757030886598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:17 functional-035676 kubelet[5316]: E1205 20:49:17.031086    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431757030886598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:17 functional-035676 kubelet[5316]: E1205 20:49:17.825693    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="58fc0c14-53ee-4fe5-8002-dc8e42dcec15"
	Dec 05 20:49:21 functional-035676 kubelet[5316]: E1205 20:49:21.824812    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="8105cd9e-6de0-45ef-bdae-b7bee83bd8d0"
	Dec 05 20:49:26 functional-035676 kubelet[5316]: E1205 20:49:26.825423    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-5j57z" podUID="dbfaf061-317a-4172-a255-e7dea91d9f24"
	Dec 05 20:49:27 functional-035676 kubelet[5316]: E1205 20:49:27.032443    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431767032239808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:27 functional-035676 kubelet[5316]: E1205 20:49:27.032479    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431767032239808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:32 functional-035676 kubelet[5316]: E1205 20:49:32.825327    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="58fc0c14-53ee-4fe5-8002-dc8e42dcec15"
	Dec 05 20:49:35 functional-035676 kubelet[5316]: E1205 20:49:35.825470    5316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="8105cd9e-6de0-45ef-bdae-b7bee83bd8d0"
	Dec 05 20:49:37 functional-035676 kubelet[5316]: E1205 20:49:37.033878    5316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431777033688605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:49:37 functional-035676 kubelet[5316]: E1205 20:49:37.033920    5316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431777033688605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [c8f01a50e6ca491bda86f844a9d299010f832db6d7dc81378a4107c75673af23] <==
	2024/12/05 20:40:56 Starting overwatch
	2024/12/05 20:40:56 Using namespace: kubernetes-dashboard
	2024/12/05 20:40:56 Using in-cluster config to connect to apiserver
	2024/12/05 20:40:56 Using secret token for csrf signing
	2024/12/05 20:40:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/05 20:40:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/05 20:40:56 Successful initial request to the apiserver, version: v1.31.2
	2024/12/05 20:40:56 Generating JWE encryption key
	2024/12/05 20:40:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/05 20:40:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/05 20:40:56 Initializing JWE encryption key from synchronized object
	2024/12/05 20:40:56 Creating in-cluster Sidecar client
	2024/12/05 20:40:56 Successful request to sidecar
	2024/12/05 20:40:56 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [41ec2db9f4ccab9d12b138b4fc8ae29735f8d85bf0eb83cb7ad5d88120a23228] <==
	I1205 20:38:51.231989       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:38:51.242616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:38:51.242657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:39:08.706139       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:39:08.706272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"822d98ee-13e4-4b2f-b177-255ce8da55ab", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-035676_f6917150-90d1-4609-9747-f5c2fa9f487a became leader
	I1205 20:39:08.706344       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-035676_f6917150-90d1-4609-9747-f5c2fa9f487a!
	I1205 20:39:08.807040       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-035676_f6917150-90d1-4609-9747-f5c2fa9f487a!
	I1205 20:39:21.335980       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1205 20:39:21.337051       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e31d1d09-1c8c-456b-ac37-f8052cd19fee 348 0 2024-12-05 20:37:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-05 20:37:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1 695 0 2024-12-05 20:39:21 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-05 20:39:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-05 20:39:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1205 20:39:21.338654       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1205 20:39:21.339217       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1" provisioned
	I1205 20:39:21.339446       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1205 20:39:21.339541       1 volume_store.go:212] Trying to save persistentvolume "pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1"
	I1205 20:39:21.346823       1 volume_store.go:219] persistentvolume "pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1" saved
	I1205 20:39:21.347107       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-7fa0edaa-2079-4ff6-97a5-5a3b4cc1b8b1
	
	
	==> storage-provisioner [8a3c4140cf5f18325ff483bdf3e02297a23e981e6575f9be8a35102819cc6cdf] <==
	I1205 20:38:29.895430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:38:29.910813       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:38:29.910858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-035676 -n functional-035676
helpers_test.go:261: (dbg) Run:  kubectl --context functional-035676 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-5j57z nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-035676 describe pod busybox-mount mysql-6cdb49bbb-5j57z nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-035676 describe pod busybox-mount mysql-6cdb49bbb-5j57z nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ba7dd2f43e881c293ef0184d1ba79aa5635b0dc9945ab0d4b51b8b6bbb95158f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 05 Dec 2024 20:39:17 +0000
	      Finished:     Thu, 05 Dec 2024 20:39:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnz26 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-nnz26:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-035676
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.218s (1.221s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-5j57z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:35 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7qx4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7qx4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-5j57z to functional-035676
	  Normal   Pulling    3m40s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m20s (x4 over 8m11s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m20s (x4 over 8m11s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    100s (x7 over 8m11s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     100s (x7 over 8m11s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:16 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sh5vf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sh5vf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-035676
	  Warning  Failed     9m49s                  kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m3s (x4 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m52s (x4 over 9m49s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m52s (x3 over 7m41s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m25s (x7 over 9m49s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    21s (x17 over 9m49s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-035676/192.168.49.2
	Start Time:       Thu, 05 Dec 2024 20:39:21 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-66k9w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-66k9w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-035676
	  Normal   Pulling    4m12s (x4 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m51s (x4 over 8m48s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s (x4 over 8m48s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m6s (x7 over 8m48s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x13 over 8m48s)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.83s)
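Note on the failure above: each of the non-running pods listed in the post-mortem (mysql-6cdb49bbb-5j57z, nginx-svc, sp-pod) is stuck in ErrImagePull/ImagePullBackOff because anonymous pulls from Docker Hub hit the toomanyrequests rate limit, as the kubelet events show. The error text itself names the remedy: authenticate the pulls. A minimal sketch of the in-cluster variant, assuming a Docker Hub account is available and that the Deployment behind mysql-6cdb49bbb-5j57z is named "mysql" (as the ReplicaSet name suggests); the secret name "regcred" and the credential variables are hypothetical placeholders, not part of the test setup:

    # Create a registry credential secret in the default namespace.
    kubectl --context functional-035676 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKERHUB_USER" \
      --docker-password="$DOCKERHUB_TOKEN"
    # Attach it so kubelet pulls docker.io/mysql:5.7 as an authenticated user.
    kubectl --context functional-035676 patch deployment mysql \
      -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'

Authenticated Docker Hub pulls get a higher rate allowance than anonymous ones, which is usually sufficient for a CI run of this size.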

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-035676 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [58fc0c14-53ee-4fe5-8002-dc8e42dcec15] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-035676 -n functional-035676
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-12-05 20:43:16.873572931 +0000 UTC m=+1130.681407734
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-035676 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-035676 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-035676/192.168.49.2
Start Time:       Thu, 05 Dec 2024 20:39:16 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sh5vf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sh5vf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-035676
Warning  Failed     3m27s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     79s (x2 over 3m27s)  kubelet            Error: ErrImagePull
Warning  Failed     79s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    69s (x2 over 3m27s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     69s (x2 over 3m27s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    58s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-035676 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-035676 logs nginx-svc -n default: exit status 1 (63.666264ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-035676 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (422.782929ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:344: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.42s)
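This setup step fails on the test host itself: the anonymous docker pull of kicbase/echo-server:1.0 is rejected with the same Docker Hub toomanyrequests error, and the image-command failures that follow appear to either hit the same limit directly (ImageTagAndLoadDaemon) or cascade from the resulting missing kicbase/echo-server:functional-035676 tag (ImageLoadDaemon, ImageReloadDaemon, ImageSaveToFile, ImageLoadFromFile, ImageSaveDaemon). A minimal host-side sketch, assuming Docker Hub credentials are available to the Jenkins worker; the environment variable names are hypothetical:

    # Authenticate the local Docker daemon, then retry the pull the test performs.
    echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USER" --password-stdin
    docker pull kicbase/echo-server:1.0

With the base image present, the dependent image-command tests can tag and load kicbase/echo-server:functional-035676 as they expect.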

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image load --daemon kicbase/echo-server:functional-035676 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-035676" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image load --daemon kicbase/echo-server:functional-035676 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-035676" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (425.183047ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:237: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image save kicbase/echo-server:functional-035676 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1205 20:39:27.605141  872579 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:39:27.606060  872579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:27.606080  872579 out.go:358] Setting ErrFile to fd 2...
	I1205 20:39:27.606093  872579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:27.606384  872579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:39:27.607344  872579 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:39:27.607509  872579 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:39:27.608139  872579 cli_runner.go:164] Run: docker container inspect functional-035676 --format={{.State.Status}}
	I1205 20:39:27.629012  872579 ssh_runner.go:195] Run: systemctl --version
	I1205 20:39:27.629063  872579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035676
	I1205 20:39:27.647674  872579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/functional-035676/id_rsa Username:docker}
	I1205 20:39:27.742569  872579 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1205 20:39:27.742646  872579 cache_images.go:253] Failed to load cached images for "functional-035676": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1205 20:39:27.742670  872579 cache_images.go:265] failed pushing to: functional-035676

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-035676
functional_test.go:419: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-035676: exit status 1 (17.476699ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-035676

                                                
                                                
** /stderr **
functional_test.go:421: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-035676

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1205 20:43:17.003015  830381 retry.go:31] will retry after 3.627811072s: Temporary Error: Get "http:": http: no Host in request URL
I1205 20:43:20.631082  830381 retry.go:31] will retry after 3.06751474s: Temporary Error: Get "http:": http: no Host in request URL
I1205 20:43:23.699163  830381 retry.go:31] will retry after 5.274602168s: Temporary Error: Get "http:": http: no Host in request URL
I1205 20:43:28.974690  830381 retry.go:31] will retry after 13.784457597s: Temporary Error: Get "http:": http: no Host in request URL
I1205 20:43:42.760162  830381 retry.go:31] will retry after 10.95458127s: Temporary Error: Get "http:": http: no Host in request URL
I1205 20:43:53.715029  830381 retry.go:31] will retry after 25.591296205s: Temporary Error: Get "http:": http: no Host in request URL
I1205 20:44:19.306787  830381 retry.go:31] will retry after 20.697487567s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-035676 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.103.64.231   10.103.64.231   80:30526/TCP   5m24s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.06s)
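For context: the tunnel did assign an external IP to the service (10.103.64.231 above), but the request loop is retrying an empty URL ("http:"), most likely because the preceding WaitService step failed when the nginx-svc pod never became Ready, so no reachable endpoint was recorded. Had the image pull succeeded, a check along the following lines (a hedged sketch, not part of the test suite) would be expected to return the page the assertion looks for:

    curl -s http://10.103.64.231 | grep "Welcome to nginx!"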

                                                
                                    

Test pass (290/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 5.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 4.66
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.11
21 TestBinaryMirror 0.79
22 TestOffline 59.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 149.56
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 14.64
37 TestAddons/parallel/InspektorGadget 10.65
40 TestAddons/parallel/CSI 60.41
41 TestAddons/parallel/Headlamp 17.44
42 TestAddons/parallel/CloudSpanner 5.5
43 TestAddons/parallel/LocalPath 50.94
44 TestAddons/parallel/NvidiaDevicePlugin 5.48
45 TestAddons/parallel/Yakd 11.99
46 TestAddons/parallel/AmdGpuDevicePlugin 6.49
47 TestAddons/StoppedEnableDisable 12.12
48 TestCertOptions 26.43
49 TestCertExpiration 226.36
51 TestForceSystemdFlag 26.34
52 TestForceSystemdEnv 37.07
54 TestKVMDriverInstallOrUpdate 3.43
58 TestErrorSpam/setup 23.7
59 TestErrorSpam/start 0.59
60 TestErrorSpam/status 0.88
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.71
63 TestErrorSpam/stop 1.38
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 43.05
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 23.63
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
75 TestFunctional/serial/CacheCmd/cache/add_local 1.34
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 29.72
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.4
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.17
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 141.28
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.22
97 TestFunctional/parallel/ServiceCmdConnect 7.51
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 2.12
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.54
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
116 TestFunctional/parallel/ProfileCmd/profile_list 0.63
117 TestFunctional/parallel/MountCmd/any-port 6.85
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.67
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/MountCmd/specific-port 1.96
125 TestFunctional/parallel/ServiceCmd/List 0.35
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
128 TestFunctional/parallel/ServiceCmd/Format 0.42
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
130 TestFunctional/parallel/ServiceCmd/URL 0.33
131 TestFunctional/parallel/Version/short 0.06
132 TestFunctional/parallel/Version/components 0.49
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
137 TestFunctional/parallel/ImageCommands/ImageBuild 2.08
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 105.43
161 TestMultiControlPlane/serial/DeployApp 5.28
162 TestMultiControlPlane/serial/PingHostFromPods 1.08
163 TestMultiControlPlane/serial/AddWorkerNode 32.54
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
166 TestMultiControlPlane/serial/CopyFile 16.05
167 TestMultiControlPlane/serial/StopSecondaryNode 12.52
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
169 TestMultiControlPlane/serial/RestartSecondaryNode 22.97
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.1
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 197.43
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.49
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
174 TestMultiControlPlane/serial/StopCluster 35.68
175 TestMultiControlPlane/serial/RestartCluster 106.16
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
177 TestMultiControlPlane/serial/AddSecondaryNode 39.9
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
182 TestJSONOutput/start/Command 40.28
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.7
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.61
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.77
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.22
207 TestKicCustomNetwork/create_custom_network 29.52
208 TestKicCustomNetwork/use_default_bridge_network 26.82
209 TestKicExistingNetwork 22.9
210 TestKicCustomSubnet 24.31
211 TestKicStaticIP 23.63
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 50.32
216 TestMountStart/serial/StartWithMountFirst 5.6
217 TestMountStart/serial/VerifyMountFirst 0.25
218 TestMountStart/serial/StartWithMountSecond 8.25
219 TestMountStart/serial/VerifyMountSecond 0.25
220 TestMountStart/serial/DeleteFirst 1.61
221 TestMountStart/serial/VerifyMountPostDelete 0.24
222 TestMountStart/serial/Stop 1.18
223 TestMountStart/serial/RestartStopped 7.16
224 TestMountStart/serial/VerifyMountPostStop 0.25
227 TestMultiNode/serial/FreshStart2Nodes 72.27
228 TestMultiNode/serial/DeployApp2Nodes 3.96
229 TestMultiNode/serial/PingHostFrom2Pods 0.76
230 TestMultiNode/serial/AddNode 28.83
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.62
233 TestMultiNode/serial/CopyFile 9.18
234 TestMultiNode/serial/StopNode 2.12
235 TestMultiNode/serial/StartAfterStop 9.46
236 TestMultiNode/serial/RestartKeepsNodes 102.07
237 TestMultiNode/serial/DeleteNode 5.31
238 TestMultiNode/serial/StopMultiNode 23.75
239 TestMultiNode/serial/RestartMultiNode 49.49
240 TestMultiNode/serial/ValidateNameConflict 25.49
245 TestPreload 109.49
247 TestScheduledStopUnix 98.18
250 TestInsufficientStorage 13.13
251 TestRunningBinaryUpgrade 64.13
253 TestKubernetesUpgrade 354.87
254 TestMissingContainerUpgrade 143.44
258 TestStoppedBinaryUpgrade/Setup 0.84
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
265 TestNoKubernetes/serial/StartWithK8s 35.12
266 TestStoppedBinaryUpgrade/Upgrade 99.93
267 TestNoKubernetes/serial/StartWithStopK8s 13.52
268 TestNoKubernetes/serial/Start 8.18
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
270 TestNoKubernetes/serial/ProfileList 5.06
272 TestPause/serial/Start 48.89
273 TestNoKubernetes/serial/Stop 2.88
274 TestNoKubernetes/serial/StartNoArgs 6.62
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
277 TestPause/serial/SecondStartNoReconfiguration 30.34
278 TestPause/serial/Pause 0.93
279 TestPause/serial/VerifyStatus 0.37
280 TestPause/serial/Unpause 1.06
281 TestPause/serial/PauseAgain 0.98
282 TestPause/serial/DeletePaused 2.88
286 TestPause/serial/VerifyDeletedResources 15.5
291 TestNetworkPlugins/group/false 4.58
296 TestStartStop/group/old-k8s-version/serial/FirstStart 131.47
298 TestStartStop/group/no-preload/serial/FirstStart 55.54
299 TestStartStop/group/no-preload/serial/DeployApp 8.33
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
301 TestStartStop/group/no-preload/serial/Stop 11.86
302 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/no-preload/serial/SecondStart 299.98
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.95
306 TestStartStop/group/old-k8s-version/serial/Stop 13.03
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/old-k8s-version/serial/SecondStart 145.12
310 TestStartStop/group/embed-certs/serial/FirstStart 41.89
311 TestStartStop/group/embed-certs/serial/DeployApp 8.25
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.86
313 TestStartStop/group/embed-certs/serial/Stop 13.8
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.1
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/embed-certs/serial/SecondStart 263.2
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.27
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.93
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 275.14
325 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
326 TestStartStop/group/old-k8s-version/serial/Pause 2.65
328 TestStartStop/group/newest-cni/serial/FirstStart 30.81
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
331 TestStartStop/group/newest-cni/serial/Stop 2.09
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
333 TestStartStop/group/newest-cni/serial/SecondStart 13.12
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
337 TestStartStop/group/newest-cni/serial/Pause 2.64
338 TestNetworkPlugins/group/auto/Start 42.63
339 TestNetworkPlugins/group/auto/KubeletFlags 0.27
340 TestNetworkPlugins/group/auto/NetCatPod 9.19
341 TestNetworkPlugins/group/auto/DNS 0.14
342 TestNetworkPlugins/group/auto/Localhost 0.12
343 TestNetworkPlugins/group/auto/HairPin 0.11
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
346 TestNetworkPlugins/group/kindnet/Start 47.46
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
348 TestStartStop/group/no-preload/serial/Pause 2.97
349 TestNetworkPlugins/group/flannel/Start 49.17
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
353 TestNetworkPlugins/group/flannel/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/DNS 0.14
355 TestNetworkPlugins/group/kindnet/Localhost 0.12
356 TestNetworkPlugins/group/kindnet/HairPin 0.13
357 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
358 TestNetworkPlugins/group/flannel/NetCatPod 10.18
359 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
361 TestNetworkPlugins/group/flannel/DNS 0.14
362 TestNetworkPlugins/group/flannel/Localhost 0.11
363 TestNetworkPlugins/group/flannel/HairPin 0.13
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
365 TestStartStop/group/embed-certs/serial/Pause 3.07
366 TestNetworkPlugins/group/enable-default-cni/Start 36.49
367 TestNetworkPlugins/group/bridge/Start 41.64
368 TestNetworkPlugins/group/calico/Start 59.32
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
372 TestNetworkPlugins/group/bridge/NetCatPod 10.18
373 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
376 TestNetworkPlugins/group/bridge/DNS 0.2
377 TestNetworkPlugins/group/bridge/Localhost 0.16
378 TestNetworkPlugins/group/bridge/HairPin 0.17
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.12
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
381 TestNetworkPlugins/group/custom-flannel/Start 52.06
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.35
384 TestNetworkPlugins/group/calico/ControllerPod 6.01
385 TestNetworkPlugins/group/calico/KubeletFlags 0.27
386 TestNetworkPlugins/group/calico/NetCatPod 9.18
387 TestNetworkPlugins/group/calico/DNS 0.14
388 TestNetworkPlugins/group/calico/Localhost 0.12
389 TestNetworkPlugins/group/calico/HairPin 0.13
390 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
391 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
392 TestNetworkPlugins/group/custom-flannel/DNS 0.13
393 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
394 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (5.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-350205 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-350205 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.666719961s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.67s)
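For reference, a minimal hand-run sketch of consuming the same -o=json event stream; the jq filter and the assumption that each output line is a self-contained JSON event (with a top-level "type" field) are illustrative, not part of the test:

  out/minikube-linux-amd64 start -o=json --download-only -p download-only-350205 --force \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
    | jq -r '.type'    # prints one event type per progress line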

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1205 20:24:31.900716  830381 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1205 20:24:31.900852  830381 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
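A hand-run equivalent of this check, as a sketch: the cache path is the one reported above (under the MINIKUBE_HOME used by this run), and comparing md5sum output against the checksum embedded in the download URL is an illustrative extra step, not something the test does:

  PRELOAD=$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
  ls -lh "$PRELOAD"     # the tarball must exist locally for preload-exists to pass
  md5sum "$PRELOAD"     # compare with the checksum=md5:... query parameter from the download URL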

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-350205
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-350205: exit status 85 (69.643103ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-350205 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |          |
	|         | -p download-only-350205        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:24:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:24:26.279602  830393 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:24:26.279717  830393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:26.279722  830393 out.go:358] Setting ErrFile to fd 2...
	I1205 20:24:26.279727  830393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:26.279909  830393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	W1205 20:24:26.280070  830393 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20053-823623/.minikube/config/config.json: open /home/jenkins/minikube-integration/20053-823623/.minikube/config/config.json: no such file or directory
	I1205 20:24:26.280678  830393 out.go:352] Setting JSON to true
	I1205 20:24:26.281741  830393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11215,"bootTime":1733419051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:24:26.281868  830393 start.go:139] virtualization: kvm guest
	I1205 20:24:26.284598  830393 out.go:97] [download-only-350205] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1205 20:24:26.284751  830393 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 20:24:26.284812  830393 notify.go:220] Checking for updates...
	I1205 20:24:26.286480  830393 out.go:169] MINIKUBE_LOCATION=20053
	I1205 20:24:26.288202  830393 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:24:26.289594  830393 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:24:26.291136  830393 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:24:26.292454  830393 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 20:24:26.294949  830393 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 20:24:26.295177  830393 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:24:26.318231  830393 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:24:26.318307  830393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:26.365753  830393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2024-12-05 20:24:26.355610185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:26.365872  830393 docker.go:318] overlay module found
	I1205 20:24:26.367696  830393 out.go:97] Using the docker driver based on user configuration
	I1205 20:24:26.367721  830393 start.go:297] selected driver: docker
	I1205 20:24:26.367728  830393 start.go:901] validating driver "docker" against <nil>
	I1205 20:24:26.367817  830393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:26.416590  830393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2024-12-05 20:24:26.407398975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:26.416766  830393 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:24:26.417457  830393 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1205 20:24:26.417611  830393 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:24:26.419602  830393 out.go:169] Using Docker driver with root privileges
	I1205 20:24:26.421028  830393 cni.go:84] Creating CNI manager for ""
	I1205 20:24:26.421099  830393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:24:26.421112  830393 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:24:26.421197  830393 start.go:340] cluster config:
	{Name:download-only-350205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-350205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:24:26.422729  830393 out.go:97] Starting "download-only-350205" primary control-plane node in "download-only-350205" cluster
	I1205 20:24:26.422759  830393 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:24:26.424066  830393 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1205 20:24:26.424100  830393 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:24:26.424220  830393 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 20:24:26.441092  830393 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 20:24:26.441320  830393 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1205 20:24:26.441418  830393 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1205 20:24:26.457337  830393 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 20:24:26.457366  830393 cache.go:56] Caching tarball of preloaded images
	I1205 20:24:26.457520  830393 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:24:26.459557  830393 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 20:24:26.459570  830393 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:24:26.494844  830393 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 20:24:29.895475  830393 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1205 20:24:30.331974  830393 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:24:30.332086  830393 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-350205 host does not exist
	  To start a cluster, run: "minikube start -p download-only-350205"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-350205
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (4.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-949612 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-949612 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.661548161s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.66s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1205 20:24:36.991004  830381 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1205 20:24:36.991081  830381 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-823623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-949612
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-949612: exit status 85 (66.424545ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-350205 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | -p download-only-350205        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| delete  | -p download-only-350205        | download-only-350205 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC | 05 Dec 24 20:24 UTC |
	| start   | -o=json --download-only        | download-only-949612 | jenkins | v1.34.0 | 05 Dec 24 20:24 UTC |                     |
	|         | -p download-only-949612        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:24:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:24:32.373403  830736 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:24:32.373541  830736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:32.373553  830736 out.go:358] Setting ErrFile to fd 2...
	I1205 20:24:32.373560  830736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:32.373745  830736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:24:32.374402  830736 out.go:352] Setting JSON to true
	I1205 20:24:32.375393  830736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11221,"bootTime":1733419051,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:24:32.375520  830736 start.go:139] virtualization: kvm guest
	I1205 20:24:32.377598  830736 out.go:97] [download-only-949612] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:24:32.377770  830736 notify.go:220] Checking for updates...
	I1205 20:24:32.379263  830736 out.go:169] MINIKUBE_LOCATION=20053
	I1205 20:24:32.380707  830736 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:24:32.382243  830736 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:24:32.383596  830736 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:24:32.385118  830736 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 20:24:32.387762  830736 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 20:24:32.388076  830736 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:24:32.410271  830736 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:24:32.410379  830736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:32.459289  830736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2024-12-05 20:24:32.450232246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:32.459388  830736 docker.go:318] overlay module found
	I1205 20:24:32.461259  830736 out.go:97] Using the docker driver based on user configuration
	I1205 20:24:32.461295  830736 start.go:297] selected driver: docker
	I1205 20:24:32.461302  830736 start.go:901] validating driver "docker" against <nil>
	I1205 20:24:32.461413  830736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:32.506214  830736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2024-12-05 20:24:32.497164287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:24:32.506436  830736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:24:32.506982  830736 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1205 20:24:32.507159  830736 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:24:32.509054  830736 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-949612 host does not exist
	  To start a cluster, run: "minikube start -p download-only-949612"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-949612
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.11s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-384641 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-384641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-384641
--- PASS: TestDownloadOnlyKic (1.11s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I1205 20:24:38.800989  830381 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-451629 --alsologtostderr --binary-mirror http://127.0.0.1:40015 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-451629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-451629
--- PASS: TestBinaryMirror (0.79s)
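A rough manual version of the same flow; the use of Python's http.server as the mirror and the profile name are assumptions (the test starts its own mirror on 127.0.0.1:40015), while the flags match the command above:

  python3 -m http.server 40015 --directory /path/to/k8s-binaries &    # hypothetical local mirror
  out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:40015 --driver=docker --container-runtime=crio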

                                                
                                    
TestOffline (59.46s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-265933 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-265933 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (56.934378773s)
helpers_test.go:175: Cleaning up "offline-crio-265933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-265933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-265933: (2.530170636s)
--- PASS: TestOffline (59.46s)
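A rough manual equivalent, assuming a prior run has already populated the kic base image and preload caches (which is what lets the offline start succeed); the profile name is illustrative:

  out/minikube-linux-amd64 start -p offline-demo --download-only --driver=docker --container-runtime=crio
  # ...disconnect from the network, then start from the local caches...
  out/minikube-linux-amd64 start -p offline-demo --alsologtostderr -v=1 --memory=2048 --wait=true \
      --driver=docker --container-runtime=crio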

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-583828
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-583828: exit status 85 (57.329355ms)

                                                
                                                
-- stdout --
	* Profile "addons-583828" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-583828"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-583828
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-583828: exit status 85 (58.189475ms)

                                                
                                                
-- stdout --
	* Profile "addons-583828" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-583828"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (149.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-583828 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-583828 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m29.563953756s)
--- PASS: TestAddons/Setup (149.56s)
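For comparison, addons can also be toggled one at a time on an already-running profile; this sketch shows only two of the addons from the command above, using the same profile name:

  out/minikube-linux-amd64 -p addons-583828 addons enable ingress
  out/minikube-linux-amd64 -p addons-583828 addons enable registry
  out/minikube-linux-amd64 addons list -p addons-583828    # shows enabled/disabled status per addon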

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-583828 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-583828 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-583828 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-583828 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d179be2-0ded-40c4-9d86-aeca4596dc5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1d179be2-0ded-40c4-9d86-aeca4596dc5c] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.006630764s
addons_test.go:633: (dbg) Run:  kubectl --context addons-583828 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-583828 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-583828 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)
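A hand-run version of the assertion this test automates; the pod name comes from testdata/busybox.yaml, and reading the file behind GOOGLE_APPLICATION_CREDENTIALS is an extra illustrative step beyond what the test checks:

  kubectl --context addons-583828 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
  kubectl --context addons-583828 exec busybox -- /bin/sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'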

                                                
                                    
TestAddons/parallel/Registry (14.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.302076ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z49gz" [fe21bb58-8336-4e34-b5f4-ad786e9a2fac] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003258053s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fzjzn" [6dd2b29c-df34-4531-be7e-32c564376c8d] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003966043s
addons_test.go:331: (dbg) Run:  kubectl --context addons-583828 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-583828 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-583828 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.841208894s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 ip
2024/12/05 20:27:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.64s)
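The two reachability checks above can be reproduced by hand; the /v2/_catalog path is the standard registry HTTP API endpoint and is an illustrative addition, the rest mirrors the commands and node IP/port shown in the log:

  kubectl --context addons-583828 run registry-test --rm --restart=Never -it \
      --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -s http://192.168.49.2:5000/v2/_catalog    # node IP from `minikube -p addons-583828 ip`, port 5000 as in the GET above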

                                                
                                    
TestAddons/parallel/InspektorGadget (10.65s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xjp7c" [da66c2d9-665f-40be-b41c-0b4b132dca4d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004907691s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable inspektor-gadget --alsologtostderr -v=1: (5.646640397s)
--- PASS: TestAddons/parallel/InspektorGadget (10.65s)

                                                
                                    
TestAddons/parallel/CSI (60.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1205 20:27:34.117477  830381 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 20:27:34.122697  830381 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 20:27:34.122726  830381 kapi.go:107] duration metric: took 5.259448ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.275087ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-583828 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-583828 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dd48e598-af52-4a0f-aa7b-9d7a6af835b3] Pending
helpers_test.go:344: "task-pv-pod" [dd48e598-af52-4a0f-aa7b-9d7a6af835b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dd48e598-af52-4a0f-aa7b-9d7a6af835b3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00732265s
addons_test.go:511: (dbg) Run:  kubectl --context addons-583828 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-583828 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-583828 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-583828 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-583828 delete pod task-pv-pod: (1.045838138s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-583828 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-583828 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-583828 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9a594990-cb2b-4f04-b490-2b7d23003f96] Pending
helpers_test.go:344: "task-pv-pod-restore" [9a594990-cb2b-4f04-b490-2b7d23003f96] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9a594990-cb2b-4f04-b490-2b7d23003f96] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004227634s
addons_test.go:553: (dbg) Run:  kubectl --context addons-583828 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-583828 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-583828 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.619702819s)
--- PASS: TestAddons/parallel/CSI (60.41s)
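
For reference, the round trip this test drives against the csi-hostpath-driver addon can be reproduced by hand. This is a hedged sketch: the manifest file names stand in for the testdata files the test applies, while the object names (hpvc, task-pv-pod, new-snapshot-demo, hpvc-restore) are the ones visible in the log above.
    # provision, snapshot, restore, re-attach
    kubectl create -f pvc.yaml                # PVC "hpvc" provisioned by the hostpath CSI driver
    kubectl create -f pv-pod.yaml             # pod "task-pv-pod" mounts the PVC
    kubectl create -f snapshot.yaml           # VolumeSnapshot "new-snapshot-demo" taken from hpvc
    kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f pvc-restore.yaml        # PVC "hpvc-restore" with the snapshot as its dataSource
    kubectl create -f pv-pod-restore.yaml     # pod "task-pv-pod-restore" mounts the restored PVC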

                                                
                                    
TestAddons/parallel/Headlamp (17.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-583828 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-686k6" [365852eb-a49e-45f4-bb58-61bec24e0015] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-686k6" [365852eb-a49e-45f4-bb58-61bec24e0015] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-686k6" [365852eb-a49e-45f4-bb58-61bec24e0015] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005029379s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable headlamp --alsologtostderr -v=1: (5.666962832s)
--- PASS: TestAddons/parallel/Headlamp (17.44s)
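
The polling above can be expressed directly with kubectl; a minimal sketch, assuming a placeholder profile name "demo" (the namespace and label selector are the ones the test waits on):
    minikube addons enable headlamp -p demo --alsologtostderr -v=1
    kubectl -n headlamp wait pod -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m
    minikube addons disable headlamp -p demo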

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-6gd2h" [debe4e32-9c42-41b9-af6f-643f033d964f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003643675s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (50.94s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-583828 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-583828 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583828 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ba97b3cc-4b13-46b9-892e-a793f63e1562] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ba97b3cc-4b13-46b9-892e-a793f63e1562] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ba97b3cc-4b13-46b9-892e-a793f63e1562] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004106032s
addons_test.go:906: (dbg) Run:  kubectl --context addons-583828 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 ssh "cat /opt/local-path-provisioner/pvc-7e18edaf-3638-4016-8b18-2b20bbc1377b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-583828 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-583828 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.065072902s)
--- PASS: TestAddons/parallel/LocalPath (50.94s)
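
The ssh "cat /opt/local-path-provisioner/..." step works because the local-path provisioner backs each PV with a host directory named after the volume and the claim. A rough manual equivalent, assuming a placeholder profile "demo" and manifest file names standing in for the testdata files:
    kubectl apply -f pvc.yaml -f pod.yaml      # PVC "test-pvc" plus the busybox pod that writes file1
    kubectl wait pod test-local-path --for=jsonpath='{.status.phase}'=Succeeded --timeout=3m
    PV=$(kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    minikube -p demo ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"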

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5zspz" [640da076-aa23-44e4-8e0d-03530daed62f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00451591s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                    
TestAddons/parallel/Yakd (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-9gksz" [c235f439-ce90-4aae-bb17-248a3e340906] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004099762s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-583828 addons disable yakd --alsologtostderr -v=1: (5.985246806s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-rc729" [c2c85683-d2fe-4fe5-bee0-cb72305ef72e] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004402201s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.49s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.12s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-583828
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-583828: (11.856979414s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-583828
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-583828
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-583828
--- PASS: TestAddons/StoppedEnableDisable (12.12s)
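
Addon toggling still works against the stopped cluster because minikube records the change in the profile config and applies it on the next start; a minimal sketch with a placeholder profile "demo":
    minikube stop -p demo
    minikube addons enable dashboard -p demo      # recorded now, applied when the cluster starts again
    minikube addons disable dashboard -p demo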

                                                
                                    
TestCertOptions (26.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-430843 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-430843 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.379369324s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-430843 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-430843 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-430843 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-430843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-430843
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-430843: (2.239509608s)
--- PASS: TestCertOptions (26.43s)
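
The openssl and kubeconfig checks above are how the custom SANs and API server port are verified; a hedged sketch with a placeholder profile "demo", reusing the same flags:
    minikube start -p demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=docker --container-runtime=crio
    minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"      # expect 192.168.15.15 and www.google.com listed
    kubectl --context demo config view | grep server:   # expect the endpoint on port 8555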

                                                
                                    
TestCertExpiration (226.36s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-847743 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-847743 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.053482231s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-847743 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1205 21:19:13.219358  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-847743 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.768933426s)
helpers_test.go:175: Cleaning up "cert-expiration-847743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-847743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-847743: (2.537032305s)
--- PASS: TestCertExpiration (226.36s)
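
The test starts a cluster whose certificates expire after three minutes, lets them lapse, then restarts with a long expiry so the certificates are regenerated; a rough manual equivalent with a placeholder profile "demo":
    minikube start -p demo --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180                                        # wait out the short-lived certificates
    minikube start -p demo --cert-expiration=8760h   # the restart regenerates the expired certs
    minikube delete -p demo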

                                                
                                    
TestForceSystemdFlag (26.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-992782 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-992782 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.555004766s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-992782 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-992782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-992782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-992782: (2.497637365s)
--- PASS: TestForceSystemdFlag (26.34s)
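
The "cat /etc/crio/crio.conf.d/02-crio.conf" step checks that --force-systemd switched CRI-O's cgroup manager; a minimal sketch with a placeholder profile "demo" (the grep target is the expected setting, not output captured from this run):
    minikube start -p demo --force-systemd --driver=docker --container-runtime=crio
    minikube -p demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expected: cgroup_manager = "systemd"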

                                                
                                    
TestForceSystemdEnv (37.07s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-929036 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-929036 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.297353933s)
helpers_test.go:175: Cleaning up "force-systemd-env-929036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-929036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-929036: (4.777249792s)
--- PASS: TestForceSystemdEnv (37.07s)
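
Same check as TestForceSystemdFlag, but driven by the MINIKUBE_FORCE_SYSTEMD environment variable instead of the flag; sketch with a placeholder profile "demo":
    MINIKUBE_FORCE_SYSTEMD=true minikube start -p demo --driver=docker --container-runtime=crio
    minikube -p demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager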

                                                
                                    
TestKVMDriverInstallOrUpdate (3.43s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1205 21:15:55.494430  830381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 21:15:55.494605  830381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1205 21:15:55.534248  830381 install.go:62] docker-machine-driver-kvm2: exit status 1
W1205 21:15:55.534651  830381 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 21:15:55.534725  830381 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate850997751/001/docker-machine-driver-kvm2
I1205 21:15:55.793314  830381 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate850997751/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0004c7de0 gz:0xc0004c7de8 tar:0xc0004c7d90 tar.bz2:0xc0004c7da0 tar.gz:0xc0004c7db0 tar.xz:0xc0004c7dc0 tar.zst:0xc0004c7dd0 tbz2:0xc0004c7da0 tgz:0xc0004c7db0 txz:0xc0004c7dc0 tzst:0xc0004c7dd0 xz:0xc0004c7df0 zip:0xc0004c7e00 zst:0xc0004c7df8] Getters:map[file:0xc001a590f0 http:0xc0007f6cd0 https:0xc0007f6d20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 21:15:55.793393  830381 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate850997751/001/docker-machine-driver-kvm2
I1205 21:15:57.408269  830381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 21:15:57.408371  830381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1205 21:15:57.446641  830381 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1205 21:15:57.446676  830381 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1205 21:15:57.446746  830381 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 21:15:57.446777  830381 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate850997751/002/docker-machine-driver-kvm2
I1205 21:15:57.611617  830381 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate850997751/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0004c7de0 gz:0xc0004c7de8 tar:0xc0004c7d90 tar.bz2:0xc0004c7da0 tar.gz:0xc0004c7db0 tar.xz:0xc0004c7dc0 tar.zst:0xc0004c7dd0 tbz2:0xc0004c7da0 tgz:0xc0004c7db0 txz:0xc0004c7dc0 tzst:0xc0004c7dd0 xz:0xc0004c7df0 zip:0xc0004c7e00 zst:0xc0004c7df8] Getters:map[file:0xc0013a3ac0 http:0xc0008024b0 https:0xc000802500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 21:15:57.611659  830381 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate850997751/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.43s)
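
The two 404s above are expected: minikube first tries the arch-suffixed release asset and its checksum, then falls back to the unsuffixed name when that older release does not ship one. A hedged shell approximation of the fallback, using the URLs from the log:
    BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # arch-specific checksum first; on 404, retry the common (unsuffixed) asset name
    curl -fLO "$BASE/docker-machine-driver-kvm2-amd64.sha256" || curl -fLO "$BASE/docker-machine-driver-kvm2.sha256"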

                                                
                                    
TestErrorSpam/setup (23.7s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-342786 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-342786 --driver=docker  --container-runtime=crio
E1205 20:37:09.969194  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:09.975631  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:09.986991  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:10.008466  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:10.049862  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:10.131360  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:10.292974  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:10.615246  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:11.256692  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-342786 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-342786 --driver=docker  --container-runtime=crio: (23.703983446s)
--- PASS: TestErrorSpam/setup (23.70s)

                                                
                                    
TestErrorSpam/start (0.59s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

                                                
                                    
TestErrorSpam/status (0.88s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 status
E1205 20:37:12.539015  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
TestErrorSpam/unpause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 unpause
E1205 20:37:15.100811  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

                                                
                                    
TestErrorSpam/stop (1.38s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 stop: (1.189996725s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-342786 --log_dir /tmp/nospam-342786 stop
--- PASS: TestErrorSpam/stop (1.38s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20053-823623/.minikube/files/etc/test/nested/copy/830381/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.05s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035676 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1205 20:37:30.464782  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:50.946625  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-035676 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (43.050472878s)
--- PASS: TestFunctional/serial/StartWithProxy (43.05s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (23.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1205 20:38:05.423684  830381 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035676 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-035676 --alsologtostderr -v=8: (23.631077185s)
functional_test.go:663: soft start took 23.631937032s for "functional-035676" cluster.
I1205 20:38:29.055208  830381 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (23.63s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-035676 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 cache add registry.k8s.io/pause:3.1: (1.038146932s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 cache add registry.k8s.io/pause:3.3: (1.189278434s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cache add registry.k8s.io/pause:latest
E1205 20:38:31.908663  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 cache add registry.k8s.io/pause:latest: (1.032231208s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-035676 /tmp/TestFunctionalserialCacheCmdcacheadd_local1753958831/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cache add minikube-local-cache-test:functional-035676
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 cache add minikube-local-cache-test:functional-035676: (1.002199228s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cache delete minikube-local-cache-test:functional-035676
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-035676
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (270.348918ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
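
What cache reload does here: images added with cache add are kept in minikube's on-host cache, so after the image is removed from the node's runtime it can be pushed back without re-pulling; condensed with a placeholder profile "demo":
    minikube -p demo cache add registry.k8s.io/pause:latest
    minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest       # drop it from the node
    minikube -p demo cache reload                                           # re-load cached images into the node
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # present again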

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 kubectl -- --context functional-035676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-035676 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (29.72s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-035676 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.720517811s)
functional_test.go:761: restart took 29.720668931s for "functional-035676" cluster.
I1205 20:39:05.946012  830381 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (29.72s)
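
--extra-config entries are keyed by component, so apiserver.enable-admission-plugins lands on the kube-apiserver flags and survives the restart. One way to confirm it on the running cluster (a sketch; the component label is the usual kubeadm static-pod label, assumed here):
    minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins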

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-035676 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 logs: (1.401173031s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 logs --file /tmp/TestFunctionalserialLogsFileCmd3471775503/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 logs --file /tmp/TestFunctionalserialLogsFileCmd3471775503/001/logs.txt: (1.42447586s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                    
TestFunctional/serial/InvalidService (4.17s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-035676 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-035676
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-035676: exit status 115 (334.846648ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30593 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-035676 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)
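
Exit status 115 with SVC_UNREACHABLE is the expected outcome when a service has no ready endpoints; checking the endpoints makes the cause visible. Sketch with a placeholder profile "demo" and a manifest name standing in for testdata/invalidsvc.yaml:
    kubectl apply -f invalidsvc.yaml
    minikube service invalid-svc -p demo      # exits 115: no running pod backs the service
    kubectl get endpoints invalid-svc         # empty ENDPOINTS column shows why
    kubectl delete -f invalidsvc.yaml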

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 config get cpus: exit status 14 (94.045393ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 config get cpus: exit status 14 (58.195849ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (141.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-035676 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-035676 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 872652: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (141.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-035676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (157.46994ms)

                                                
                                                
-- stdout --
	* [functional-035676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:39:24.021673  870572 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:39:24.021788  870572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.021796  870572 out.go:358] Setting ErrFile to fd 2...
	I1205 20:39:24.021801  870572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.022031  870572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:39:24.022572  870572 out.go:352] Setting JSON to false
	I1205 20:39:24.023691  870572 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12113,"bootTime":1733419051,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:39:24.023799  870572 start.go:139] virtualization: kvm guest
	I1205 20:39:24.026050  870572 out.go:177] * [functional-035676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:39:24.027513  870572 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:39:24.027519  870572 notify.go:220] Checking for updates...
	I1205 20:39:24.030104  870572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:39:24.031358  870572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:39:24.032556  870572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:39:24.033713  870572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:39:24.034954  870572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:39:24.036850  870572 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:39:24.037544  870572 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:39:24.063626  870572 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:39:24.063719  870572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:39:24.113226  870572 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-12-05 20:39:24.103916704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:39:24.113395  870572 docker.go:318] overlay module found
	I1205 20:39:24.115389  870572 out.go:177] * Using the docker driver based on existing profile
	I1205 20:39:24.116629  870572 start.go:297] selected driver: docker
	I1205 20:39:24.116641  870572 start.go:901] validating driver "docker" against &{Name:functional-035676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-035676 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:39:24.116763  870572 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:39:24.118717  870572 out.go:201] 
	W1205 20:39:24.119912  870572 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 20:39:24.121408  870572 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035676 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
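The stderr block above ends with RSRC_INSUFFICIENT_REQ_MEMORY because that first dry-run requests only 250MiB; the second invocation at functional_test.go:991 omits the memory flag and succeeds, which is why the test passes. As a minimal illustration of the failing check, the Go sketch below compares the two figures quoted in the error; it is illustrative only and is not minikube's actual validation code (note the message itself mixes MiB and MB).

// memcheck_sketch.go - illustrative only; not minikube's validation code.
package main

import "fmt"

func main() {
	const requestedMiB = 250 // from --memory 250MB in the dry-run above
	const minimumMB = 1800   // the usable minimum quoted in the error above
	if requestedMiB < minimumMB {
		fmt.Printf("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB\n",
			requestedMiB, minimumMB)
	}
}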

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-035676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-035676 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.460337ms)

                                                
                                                
-- stdout --
	* [functional-035676] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:39:24.385769  870931 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:39:24.385888  870931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.385894  870931 out.go:358] Setting ErrFile to fd 2...
	I1205 20:39:24.385898  870931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:24.386220  870931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:39:24.386774  870931 out.go:352] Setting JSON to false
	I1205 20:39:24.387885  870931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12113,"bootTime":1733419051,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:39:24.388009  870931 start.go:139] virtualization: kvm guest
	I1205 20:39:24.390178  870931 out.go:177] * [functional-035676] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 20:39:24.391705  870931 notify.go:220] Checking for updates...
	I1205 20:39:24.391712  870931 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:39:24.393211  870931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:39:24.394693  870931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 20:39:24.395973  870931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 20:39:24.397199  870931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:39:24.398443  870931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:39:24.400113  870931 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:39:24.400531  870931 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:39:24.422225  870931 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 20:39:24.422323  870931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:39:24.477630  870931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2024-12-05 20:39:24.467300148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:39:24.477769  870931 docker.go:318] overlay module found
	I1205 20:39:24.480254  870931 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1205 20:39:24.481752  870931 start.go:297] selected driver: docker
	I1205 20:39:24.481768  870931 start.go:901] validating driver "docker" against &{Name:functional-035676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-035676 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:39:24.481862  870931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:39:24.484129  870931 out.go:201] 
	W1205 20:39:24.485549  870931 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 20:39:24.486824  870931 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)
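StatusCmd exercises the default, Go-template, and JSON output formats of the status command. Below is a minimal Go sketch (not part of the test suite) that shells out to the same binary and decodes the JSON form; the binary path and profile name are the ones used in this run, and the field names are the same ones the template in the log selects.

// status_sketch.go - decode `minikube status -o json` for one profile.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same binary and profile as the run above; adjust for your environment.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035676",
		"status", "-o", "json").Output()
	if err != nil {
		log.Fatalf("status failed: %v", err)
	}
	// Decode generically rather than assuming the full schema.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		log.Fatalf("decode failed: %v", err)
	}
	fmt.Printf("Host=%v Kubelet=%v APIServer=%v Kubeconfig=%v\n",
		status["Host"], status["Kubelet"], status["APIServer"], status["Kubeconfig"])
}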

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-035676 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-035676 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bmhxn" [c14e7a91-09b6-4a9d-a8b2-00113ac87b19] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bmhxn" [c14e7a91-09b6-4a9d-a8b2-00113ac87b19] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003848821s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31751
functional_test.go:1675: http://192.168.49.2:31751: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-bmhxn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31751
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.51s)
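The connect test deploys an echoserver, exposes it as a NodePort service, resolves the URL, and checks the response body. A Go sketch of the same GET against the endpoint reported above; the URL is specific to this run, and the check only looks for the Hostname line shown in the body dump.

// connect_sketch.go - fetch the echoserver endpoint found by the test above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:31751") // endpoint from this run
	if err != nil {
		log.Fatalf("GET failed: %v", err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read failed: %v", err)
	}
	// The echoserver reply begins with the pod hostname, as in the log above.
	if !strings.Contains(string(body), "Hostname:") {
		log.Fatalf("unexpected body:\n%s", body)
	}
	fmt.Print(string(body))
}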

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh -n functional-035676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cp functional-035676:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd97477047/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh -n functional-035676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh -n functional-035676 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)
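CpCmd copies a local file into the node and reads it back over ssh, including a copy into a directory that does not yet exist. A minimal Go sketch of the same round trip, assuming the binary path, profile, and testdata file from this run; it is not part of the test suite.

// cp_sketch.go - copy a file into the node and read it back, as the test does.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary used in this report and aborts on failure.
func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	run("-p", "functional-035676", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	out := run("-p", "functional-035676", "ssh", "-n", "functional-035676",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Printf("copied file contents:\n%s", out)
}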

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/830381/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /etc/test/nested/copy/830381/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/830381.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /etc/ssl/certs/830381.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/830381.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /usr/share/ca-certificates/830381.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8303812.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /etc/ssl/certs/8303812.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8303812.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /usr/share/ca-certificates/8303812.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)
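CertSync verifies that the host's test certificates were synced into the node both under their .pem names and under hashed names such as 51391683.0 (presumably the OpenSSL subject-hash style links used by system trust stores; that interpretation is an assumption, not something this log states). A Go sketch that probes the same paths over ssh:

// certsync_sketch.go - verify the synced certificate paths used above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Paths are the ones probed in this run.
	paths := []string{
		"/etc/ssl/certs/830381.pem",
		"/usr/share/ca-certificates/830381.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035676",
			"ssh", "sudo cat "+p).CombinedOutput()
		if err != nil {
			log.Fatalf("%s not readable: %v\n%s", p, err, out)
		}
		fmt.Printf("%s: %d bytes\n", p, len(out))
	}
}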

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-035676 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh "sudo systemctl is-active docker": exit status 1 (257.156124ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh "sudo systemctl is-active containerd": exit status 1 (255.374003ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
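The non-zero exits above are the expected outcome: with cri-o selected, `systemctl is-active` prints "inactive" and exits with status 3 for docker and containerd, and the test treats that as success. A Go sketch (not part of the suite) that performs the same probe and reports the state without aborting on the non-zero exit:

// runtime_sketch.go - confirm docker and containerd are inactive on a
// crio-based node, mirroring the checks in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-035676",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.Output() // non-nil err is expected when the unit is inactive
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: state=%q exitErr=%v\n", unit, state, err)
	}
}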

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-035676 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-035676 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-vmcv5" [34066d3d-a99e-44e1-9022-929a09692c4a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-vmcv5" [34066d3d-a99e-44e1-9022-929a09692c4a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004703118s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "535.737867ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "92.769177ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdany-port1242440238/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733431154431523584" to /tmp/TestFunctionalparallelMountCmdany-port1242440238/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733431154431523584" to /tmp/TestFunctionalparallelMountCmdany-port1242440238/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733431154431523584" to /tmp/TestFunctionalparallelMountCmdany-port1242440238/001/test-1733431154431523584
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (434.465486ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 20:39:14.866335  830381 retry.go:31] will retry after 485.167062ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 20:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 20:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 20:39 test-1733431154431523584
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh cat /mount-9p/test-1733431154431523584
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-035676 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [284f3962-e755-4e7d-8baf-0dbcbd4cdc71] Pending
helpers_test.go:344: "busybox-mount" [284f3962-e755-4e7d-8baf-0dbcbd4cdc71] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [284f3962-e755-4e7d-8baf-0dbcbd4cdc71] Running
helpers_test.go:344: "busybox-mount" [284f3962-e755-4e7d-8baf-0dbcbd4cdc71] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [284f3962-e755-4e7d-8baf-0dbcbd4cdc71] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003991813s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-035676 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdany-port1242440238/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.85s)
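The first findmnt probe above fails because the 9p mount is still being established, and the harness retries after ~485 ms (the retry.go line). A minimal Go sketch of that poll-until-mounted pattern; the interval and attempt count are illustrative, not the values retry.go uses.

// mountwait_sketch.go - poll until the 9p mount is visible in the guest.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const attempts = 10
	for i := 0; i < attempts; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035676",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		time.Sleep(500 * time.Millisecond) // illustrative backoff
	}
	log.Fatal("mount never became visible")
}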

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "606.721551ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.158781ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.67s)
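The --light listing returns in ~59 ms against ~600 ms for the full one, presumably because it skips the per-profile status probes. A Go sketch that decodes the JSON output generically, since this log does not show the schema:

// profiles_sketch.go - decode `minikube profile list -o json` without
// assuming field names.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var profiles map[string]interface{}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatalf("decode failed: %v", err)
	}
	for key, val := range profiles {
		fmt.Printf("%s: %T\n", key, val)
	}
}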

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-035676 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-035676 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-035676 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 868418: os: process already finished
helpers_test.go:508: unable to kill pid 868177: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-035676 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-035676 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdspecific-port1463689980/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.230322ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 20:39:21.568781  830381 retry.go:31] will retry after 633.206024ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdspecific-port1463689980/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh "sudo umount -f /mount-9p": exit status 1 (263.642828ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-035676 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdspecific-port1463689980/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 service list -o json
functional_test.go:1494: Took "308.020237ms" to run "out/minikube-linux-amd64 -p functional-035676 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31550
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1927881527/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1927881527/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1927881527/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T" /mount1: exit status 1 (321.937955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 20:39:23.560458  830381 retry.go:31] will retry after 332.660427ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-035676 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1927881527/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1927881527/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-035676 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1927881527/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31550
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035676 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-035676
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241023-a345ebe4
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035676 image ls --format short --alsologtostderr:
I1205 20:41:48.875452  874417 out.go:345] Setting OutFile to fd 1 ...
I1205 20:41:48.875739  874417 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:48.875749  874417 out.go:358] Setting ErrFile to fd 2...
I1205 20:41:48.875754  874417 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:48.875925  874417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
I1205 20:41:48.876598  874417 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:48.876711  874417 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:48.877149  874417 cli_runner.go:164] Run: docker container inspect functional-035676 --format={{.State.Status}}
I1205 20:41:48.895059  874417 ssh_runner.go:195] Run: systemctl --version
I1205 20:41:48.895110  874417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035676
I1205 20:41:48.913120  874417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/functional-035676/id_rsa Username:docker}
I1205 20:41:49.001624  874417 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
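The stderr above shows that `image ls` is backed by `sudo crictl images --output json` inside the node. A Go sketch that runs the same command over minikube ssh; the top-level "images" key and per-image fields are assumptions about crictl's JSON shape, not something this log shows.

// images_sketch.go - list images the way `image ls` does above, via crictl.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-035676",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		log.Fatalf("crictl images failed: %v", err)
	}
	// Assumed schema: a top-level "images" array of objects.
	var result struct {
		Images []map[string]interface{} `json:"images"`
	}
	if err := json.Unmarshal(out, &result); err != nil {
		log.Fatalf("decode failed: %v", err)
	}
	for _, img := range result.Images {
		fmt.Println(img["repoTags"], img["size"])
	}
}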

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035676 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/kindest/kindnetd              | v20241023-a345ebe4 | 9ca7e41918271 | 95MB   |
| localhost/minikube-local-cache-test     | functional-035676  | 2565f7d462e1c | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035676 image ls --format table --alsologtostderr:
I1205 20:41:49.310978  874518 out.go:345] Setting OutFile to fd 1 ...
I1205 20:41:49.311107  874518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:49.311117  874518 out.go:358] Setting ErrFile to fd 2...
I1205 20:41:49.311124  874518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:49.311376  874518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
I1205 20:41:49.312073  874518 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:49.312182  874518 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:49.312554  874518 cli_runner.go:164] Run: docker container inspect functional-035676 --format={{.State.Status}}
I1205 20:41:49.331584  874518 ssh_runner.go:195] Run: systemctl --version
I1205 20:41:49.331640  874518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035676
I1205 20:41:49.350756  874518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/functional-035676/id_rsa Username:docker}
I1205 20:41:49.441849  874518 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035676 image ls --format json --alsologtostderr:
[{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5","repoDigests":["docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16","docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"],"repoTags":["docker.io/kindest/kindnetd:v20241023-a345ebe4"],"size":"94958644"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"2565f7d462e1ca9f19e0b72373720069fa42e1c0c086b09eaa977f30ad8a4039","repoDigests":["localhost/minikube-local-cache-test@sha256:9dfe440579f6db456d48f67a5938cc43fa3493cda1f089a5ba1fbc775dad77f4"],"repoTags":["localhost/minikube-local-cache-test:functional-035676"],"size":"3330"},
{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},
{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},
{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035676 image ls --format json --alsologtostderr:
I1205 20:41:49.092991  874468 out.go:345] Setting OutFile to fd 1 ...
I1205 20:41:49.093411  874468 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:49.093432  874468 out.go:358] Setting ErrFile to fd 2...
I1205 20:41:49.093440  874468 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:49.093868  874468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
I1205 20:41:49.095146  874468 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:49.095298  874468 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:49.095686  874468 cli_runner.go:164] Run: docker container inspect functional-035676 --format={{.State.Status}}
I1205 20:41:49.113560  874468 ssh_runner.go:195] Run: systemctl --version
I1205 20:41:49.113628  874468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035676
I1205 20:41:49.131160  874468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/functional-035676/id_rsa Username:docker}
I1205 20:41:49.221743  874468 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035676 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5
repoDigests:
- docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16
- docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d
repoTags:
- docker.io/kindest/kindnetd:v20241023-a345ebe4
size: "94958644"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 2565f7d462e1ca9f19e0b72373720069fa42e1c0c086b09eaa977f30ad8a4039
repoDigests:
- localhost/minikube-local-cache-test@sha256:9dfe440579f6db456d48f67a5938cc43fa3493cda1f089a5ba1fbc775dad77f4
repoTags:
- localhost/minikube-local-cache-test:functional-035676
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035676 image ls --format yaml --alsologtostderr:
I1205 20:41:49.535484  874583 out.go:345] Setting OutFile to fd 1 ...
I1205 20:41:49.535777  874583 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:49.535790  874583 out.go:358] Setting ErrFile to fd 2...
I1205 20:41:49.535794  874583 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:49.536024  874583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
I1205 20:41:49.536736  874583 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:49.536857  874583 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:49.537296  874583 cli_runner.go:164] Run: docker container inspect functional-035676 --format={{.State.Status}}
I1205 20:41:49.555074  874583 ssh_runner.go:195] Run: systemctl --version
I1205 20:41:49.555149  874583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035676
I1205 20:41:49.573983  874583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/functional-035676/id_rsa Username:docker}
I1205 20:41:49.661588  874583 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-035676 ssh pgrep buildkitd: exit status 1 (251.380925ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image build -t localhost/my-image:functional-035676 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-035676 image build -t localhost/my-image:functional-035676 testdata/build --alsologtostderr: (1.602636467s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-035676 image build -t localhost/my-image:functional-035676 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 647659a75cb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-035676
--> 4e82d07a589
Successfully tagged localhost/my-image:functional-035676
4e82d07a589922259f44b776c966e6feec9af3778a4406b1cf6a05ece71b968a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-035676 image build -t localhost/my-image:functional-035676 testdata/build --alsologtostderr:
I1205 20:41:50.001939  874730 out.go:345] Setting OutFile to fd 1 ...
I1205 20:41:50.002760  874730 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:50.002773  874730 out.go:358] Setting ErrFile to fd 2...
I1205 20:41:50.002777  874730 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:41:50.002979  874730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
I1205 20:41:50.003614  874730 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:50.004222  874730 config.go:182] Loaded profile config "functional-035676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:41:50.004759  874730 cli_runner.go:164] Run: docker container inspect functional-035676 --format={{.State.Status}}
I1205 20:41:50.022240  874730 ssh_runner.go:195] Run: systemctl --version
I1205 20:41:50.022289  874730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-035676
I1205 20:41:50.039601  874730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/functional-035676/id_rsa Username:docker}
I1205 20:41:50.129918  874730 build_images.go:161] Building image from path: /tmp/build.3572270808.tar
I1205 20:41:50.129988  874730 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 20:41:50.139048  874730 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3572270808.tar
I1205 20:41:50.142631  874730 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3572270808.tar: stat -c "%s %y" /var/lib/minikube/build/build.3572270808.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3572270808.tar': No such file or directory
I1205 20:41:50.142664  874730 ssh_runner.go:362] scp /tmp/build.3572270808.tar --> /var/lib/minikube/build/build.3572270808.tar (3072 bytes)
I1205 20:41:50.166812  874730 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3572270808
I1205 20:41:50.175813  874730 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3572270808 -xf /var/lib/minikube/build/build.3572270808.tar
I1205 20:41:50.184606  874730 crio.go:315] Building image: /var/lib/minikube/build/build.3572270808
I1205 20:41:50.184673  874730 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-035676 /var/lib/minikube/build/build.3572270808 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 20:41:51.532052  874730 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-035676 /var/lib/minikube/build/build.3572270808 --cgroup-manager=cgroupfs: (1.347346045s)
I1205 20:41:51.532143  874730 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3572270808
I1205 20:41:51.540807  874730 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3572270808.tar
I1205 20:41:51.549536  874730 build_images.go:217] Built localhost/my-image:functional-035676 from /tmp/build.3572270808.tar
I1205 20:41:51.549576  874730 build_images.go:133] succeeded building to: functional-035676
I1205 20:41:51.549581  874730 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls
E1205 20:42:09.967737  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.08s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image rm kicbase/echo-server:functional-035676 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-035676 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-035676 tunnel --alsologtostderr] ...
E1205 20:47:09.966765  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-035676
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-035676
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-035676
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (105.43s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845474 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-845474 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m44.754645553s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (105.43s)

TestMultiControlPlane/serial/DeployApp (5.28s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-845474 -- rollout status deployment/busybox: (3.322706144s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-bgbkd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-f6dcn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-fmm55 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-bgbkd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-f6dcn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-fmm55 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-bgbkd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-f6dcn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-fmm55 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.28s)

TestMultiControlPlane/serial/PingHostFromPods (1.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-bgbkd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-bgbkd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-f6dcn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-f6dcn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-fmm55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845474 -- exec busybox-7dff88458-fmm55 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)

TestMultiControlPlane/serial/AddWorkerNode (32.54s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-845474 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-845474 -v=7 --alsologtostderr: (31.687424382s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.54s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-845474 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (16.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp testdata/cp-test.txt ha-845474:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2943854232/001/cp-test_ha-845474.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474:/home/docker/cp-test.txt ha-845474-m02:/home/docker/cp-test_ha-845474_ha-845474-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test_ha-845474_ha-845474-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474:/home/docker/cp-test.txt ha-845474-m03:/home/docker/cp-test_ha-845474_ha-845474-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test_ha-845474_ha-845474-m03.txt"
E1205 20:52:09.967355  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474:/home/docker/cp-test.txt ha-845474-m04:/home/docker/cp-test_ha-845474_ha-845474-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test_ha-845474_ha-845474-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp testdata/cp-test.txt ha-845474-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2943854232/001/cp-test_ha-845474-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m02:/home/docker/cp-test.txt ha-845474:/home/docker/cp-test_ha-845474-m02_ha-845474.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test_ha-845474-m02_ha-845474.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m02:/home/docker/cp-test.txt ha-845474-m03:/home/docker/cp-test_ha-845474-m02_ha-845474-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test_ha-845474-m02_ha-845474-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m02:/home/docker/cp-test.txt ha-845474-m04:/home/docker/cp-test_ha-845474-m02_ha-845474-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test_ha-845474-m02_ha-845474-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp testdata/cp-test.txt ha-845474-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2943854232/001/cp-test_ha-845474-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m03:/home/docker/cp-test.txt ha-845474:/home/docker/cp-test_ha-845474-m03_ha-845474.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test_ha-845474-m03_ha-845474.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m03:/home/docker/cp-test.txt ha-845474-m02:/home/docker/cp-test_ha-845474-m03_ha-845474-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test_ha-845474-m03_ha-845474-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m03:/home/docker/cp-test.txt ha-845474-m04:/home/docker/cp-test_ha-845474-m03_ha-845474-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test_ha-845474-m03_ha-845474-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp testdata/cp-test.txt ha-845474-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2943854232/001/cp-test_ha-845474-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m04:/home/docker/cp-test.txt ha-845474:/home/docker/cp-test_ha-845474-m04_ha-845474.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474 "sudo cat /home/docker/cp-test_ha-845474-m04_ha-845474.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m04:/home/docker/cp-test.txt ha-845474-m02:/home/docker/cp-test_ha-845474-m04_ha-845474-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m02 "sudo cat /home/docker/cp-test_ha-845474-m04_ha-845474-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 cp ha-845474-m04:/home/docker/cp-test.txt ha-845474-m03:/home/docker/cp-test_ha-845474-m04_ha-845474-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 ssh -n ha-845474-m03 "sudo cat /home/docker/cp-test_ha-845474-m04_ha-845474-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.05s)

TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-845474 node stop m02 -v=7 --alsologtostderr: (11.860775086s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr: exit status 7 (662.019292ms)

-- stdout --
	ha-845474
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845474-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845474-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845474-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1205 20:52:34.240506  899350 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:52:34.240629  899350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:52:34.240637  899350 out.go:358] Setting ErrFile to fd 2...
	I1205 20:52:34.240641  899350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:52:34.240808  899350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:52:34.241040  899350 out.go:352] Setting JSON to false
	I1205 20:52:34.241076  899350 mustload.go:65] Loading cluster: ha-845474
	I1205 20:52:34.241179  899350 notify.go:220] Checking for updates...
	I1205 20:52:34.241528  899350 config.go:182] Loaded profile config "ha-845474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:52:34.241553  899350 status.go:174] checking status of ha-845474 ...
	I1205 20:52:34.241977  899350 cli_runner.go:164] Run: docker container inspect ha-845474 --format={{.State.Status}}
	I1205 20:52:34.261153  899350 status.go:371] ha-845474 host status = "Running" (err=<nil>)
	I1205 20:52:34.261187  899350 host.go:66] Checking if "ha-845474" exists ...
	I1205 20:52:34.261505  899350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-845474
	I1205 20:52:34.280346  899350 host.go:66] Checking if "ha-845474" exists ...
	I1205 20:52:34.280654  899350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:52:34.280701  899350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-845474
	I1205 20:52:34.298474  899350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/ha-845474/id_rsa Username:docker}
	I1205 20:52:34.390354  899350 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:34.394591  899350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:34.405509  899350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:52:34.454200  899350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:73 SystemTime:2024-12-05 20:52:34.444388446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 20:52:34.455003  899350 kubeconfig.go:125] found "ha-845474" server: "https://192.168.49.254:8443"
	I1205 20:52:34.455042  899350 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.455090  899350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:34.466222  899350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	I1205 20:52:34.475376  899350 api_server.go:182] apiserver freezer: "7:freezer:/docker/f0c7ef6dd3d5be151f2f50d733bd94477cc90e8b6c7976b515dbf7e2f4b3cd5c/crio/crio-9e28d826bc2899d32711a2b1029aac6743be853b33c5c6253fdd9f7dc7ebc411"
	I1205 20:52:34.475448  899350 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f0c7ef6dd3d5be151f2f50d733bd94477cc90e8b6c7976b515dbf7e2f4b3cd5c/crio/crio-9e28d826bc2899d32711a2b1029aac6743be853b33c5c6253fdd9f7dc7ebc411/freezer.state
	I1205 20:52:34.483424  899350 api_server.go:204] freezer state: "THAWED"
	I1205 20:52:34.483457  899350 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 20:52:34.487416  899350 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 20:52:34.487444  899350 status.go:463] ha-845474 apiserver status = Running (err=<nil>)
	I1205 20:52:34.487459  899350 status.go:176] ha-845474 status: &{Name:ha-845474 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:52:34.487490  899350 status.go:174] checking status of ha-845474-m02 ...
	I1205 20:52:34.487804  899350 cli_runner.go:164] Run: docker container inspect ha-845474-m02 --format={{.State.Status}}
	I1205 20:52:34.505330  899350 status.go:371] ha-845474-m02 host status = "Stopped" (err=<nil>)
	I1205 20:52:34.505353  899350 status.go:384] host is not running, skipping remaining checks
	I1205 20:52:34.505360  899350 status.go:176] ha-845474-m02 status: &{Name:ha-845474-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:52:34.505388  899350 status.go:174] checking status of ha-845474-m03 ...
	I1205 20:52:34.505664  899350 cli_runner.go:164] Run: docker container inspect ha-845474-m03 --format={{.State.Status}}
	I1205 20:52:34.523611  899350 status.go:371] ha-845474-m03 host status = "Running" (err=<nil>)
	I1205 20:52:34.523642  899350 host.go:66] Checking if "ha-845474-m03" exists ...
	I1205 20:52:34.523915  899350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-845474-m03
	I1205 20:52:34.541938  899350 host.go:66] Checking if "ha-845474-m03" exists ...
	I1205 20:52:34.542266  899350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:52:34.542323  899350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-845474-m03
	I1205 20:52:34.559864  899350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/ha-845474-m03/id_rsa Username:docker}
	I1205 20:52:34.650528  899350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:34.662121  899350 kubeconfig.go:125] found "ha-845474" server: "https://192.168.49.254:8443"
	I1205 20:52:34.662153  899350 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.662191  899350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:34.672750  899350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	I1205 20:52:34.681839  899350 api_server.go:182] apiserver freezer: "7:freezer:/docker/3243b73065ff6622624b1cc5ebee5ef806372da842990cb770c052649eb293b9/crio/crio-d9a1d749ff5645078946b3eee3ba6936332225b7565ed8ba776eeab237a8e8f8"
	I1205 20:52:34.681911  899350 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3243b73065ff6622624b1cc5ebee5ef806372da842990cb770c052649eb293b9/crio/crio-d9a1d749ff5645078946b3eee3ba6936332225b7565ed8ba776eeab237a8e8f8/freezer.state
	I1205 20:52:34.690406  899350 api_server.go:204] freezer state: "THAWED"
	I1205 20:52:34.690437  899350 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1205 20:52:34.694417  899350 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1205 20:52:34.694441  899350 status.go:463] ha-845474-m03 apiserver status = Running (err=<nil>)
	I1205 20:52:34.694450  899350 status.go:176] ha-845474-m03 status: &{Name:ha-845474-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:52:34.694465  899350 status.go:174] checking status of ha-845474-m04 ...
	I1205 20:52:34.694705  899350 cli_runner.go:164] Run: docker container inspect ha-845474-m04 --format={{.State.Status}}
	I1205 20:52:34.712019  899350 status.go:371] ha-845474-m04 host status = "Running" (err=<nil>)
	I1205 20:52:34.712048  899350 host.go:66] Checking if "ha-845474-m04" exists ...
	I1205 20:52:34.712333  899350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-845474-m04
	I1205 20:52:34.729863  899350 host.go:66] Checking if "ha-845474-m04" exists ...
	I1205 20:52:34.730139  899350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:52:34.730177  899350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-845474-m04
	I1205 20:52:34.748698  899350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/ha-845474-m04/id_rsa Username:docker}
	I1205 20:52:34.837961  899350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:34.849082  899350 status.go:176] ha-845474-m04 status: &{Name:ha-845474-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (22.97s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-845474 node start m02 -v=7 --alsologtostderr: (21.699726739s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr: (1.19621131s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.09759765s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (197.43s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-845474 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-845474 -v=7 --alsologtostderr
E1205 20:53:33.036320  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-845474 -v=7 --alsologtostderr: (36.701457324s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845474 --wait=true -v=7 --alsologtostderr
E1205 20:54:13.219679  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.226097  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.237565  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.259004  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.300455  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.381971  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.543549  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:13.865274  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:14.507343  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:15.789061  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.350525  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:23.472256  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:33.714226  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:54.195913  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:55:35.158148  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-845474 --wait=true -v=7 --alsologtostderr: (2m40.609984596s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-845474
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (197.43s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-845474 node delete m03 -v=7 --alsologtostderr: (10.716977016s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.49s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 stop -v=7 --alsologtostderr
E1205 20:56:57.079585  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-845474 stop -v=7 --alsologtostderr: (35.568960898s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr: exit status 7 (112.81433ms)

                                                
                                                
-- stdout --
	ha-845474
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845474-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845474-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:57:04.828475  917787 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:57:04.828596  917787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:57:04.828601  917787 out.go:358] Setting ErrFile to fd 2...
	I1205 20:57:04.828605  917787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:57:04.828802  917787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 20:57:04.829019  917787 out.go:352] Setting JSON to false
	I1205 20:57:04.829057  917787 mustload.go:65] Loading cluster: ha-845474
	I1205 20:57:04.829129  917787 notify.go:220] Checking for updates...
	I1205 20:57:04.829531  917787 config.go:182] Loaded profile config "ha-845474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:57:04.829556  917787 status.go:174] checking status of ha-845474 ...
	I1205 20:57:04.830027  917787 cli_runner.go:164] Run: docker container inspect ha-845474 --format={{.State.Status}}
	I1205 20:57:04.849700  917787 status.go:371] ha-845474 host status = "Stopped" (err=<nil>)
	I1205 20:57:04.849741  917787 status.go:384] host is not running, skipping remaining checks
	I1205 20:57:04.849758  917787 status.go:176] ha-845474 status: &{Name:ha-845474 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:57:04.849783  917787 status.go:174] checking status of ha-845474-m02 ...
	I1205 20:57:04.850173  917787 cli_runner.go:164] Run: docker container inspect ha-845474-m02 --format={{.State.Status}}
	I1205 20:57:04.868367  917787 status.go:371] ha-845474-m02 host status = "Stopped" (err=<nil>)
	I1205 20:57:04.868396  917787 status.go:384] host is not running, skipping remaining checks
	I1205 20:57:04.868403  917787 status.go:176] ha-845474-m02 status: &{Name:ha-845474-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:57:04.868428  917787 status.go:174] checking status of ha-845474-m04 ...
	I1205 20:57:04.868682  917787 cli_runner.go:164] Run: docker container inspect ha-845474-m04 --format={{.State.Status}}
	I1205 20:57:04.885957  917787 status.go:371] ha-845474-m04 host status = "Stopped" (err=<nil>)
	I1205 20:57:04.885981  917787 status.go:384] host is not running, skipping remaining checks
	I1205 20:57:04.885993  917787 status.go:176] ha-845474-m04 status: &{Name:ha-845474-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (106.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845474 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 20:57:09.967375  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-845474 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m45.387684632s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (106.16s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (39.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-845474 --control-plane -v=7 --alsologtostderr
E1205 20:59:13.219290  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-845474 --control-plane -v=7 --alsologtostderr: (39.042214626s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-845474 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.90s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (40.28s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-777213 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1205 20:59:40.921934  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-777213 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.276045794s)
--- PASS: TestJSONOutput/start/Command (40.28s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-777213 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-777213 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-777213 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-777213 --output=json --user=testUser: (5.76937473s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-714995 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-714995 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.964373ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"111ca825-0b47-4463-b727-3fd1142bd4c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-714995] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4569b93b-3809-4cbc-a7bf-8f07d6338cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}
	{"specversion":"1.0","id":"b5f003cc-1e20-467b-b5b6-acacf2487ae0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d895c8a-3f23-44bd-8dbe-8af30f28c862","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig"}}
	{"specversion":"1.0","id":"93984e8e-df4a-4fdf-aa2a-0bb5d823adac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube"}}
	{"specversion":"1.0","id":"4b9c78cb-b131-47a4-aa7b-ea9a6e52c962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ad7eaaa2-1528-45cc-9d62-b455b1980f2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"977866d4-c5f0-4c72-97a2-f7c3f6b572e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-714995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-714995
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.52s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-009181 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-009181 --network=: (27.381906772s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-009181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-009181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-009181: (2.120397252s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.52s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-602350 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-602350 --network=bridge: (24.842874276s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-602350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-602350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-602350: (1.959872667s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.82s)

                                                
                                    
TestKicExistingNetwork (22.9s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1205 21:01:28.794688  830381 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 21:01:28.811807  830381 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 21:01:28.811887  830381 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1205 21:01:28.811911  830381 cli_runner.go:164] Run: docker network inspect existing-network
W1205 21:01:28.828352  830381 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1205 21:01:28.828390  830381 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1205 21:01:28.828422  830381 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1205 21:01:28.828591  830381 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 21:01:28.846580  830381 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4b567fdc047b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d3:bd:50:e8} reservation:<nil>}
I1205 21:01:28.847147  830381 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ba5e60}
I1205 21:01:28.847183  830381 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1205 21:01:28.847236  830381 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1205 21:01:28.912423  830381 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-449698 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-449698 --network=existing-network: (20.87889806s)
helpers_test.go:175: Cleaning up "existing-network-449698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-449698
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-449698: (1.865492119s)
I1205 21:01:51.674694  830381 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.90s)

                                                
                                    
TestKicCustomSubnet (24.31s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-865927 --subnet=192.168.60.0/24
E1205 21:02:09.970124  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-865927 --subnet=192.168.60.0/24: (22.296501958s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-865927 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-865927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-865927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-865927: (1.99388918s)
--- PASS: TestKicCustomSubnet (24.31s)

                                                
                                    
TestKicStaticIP (23.63s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-396614 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-396614 --static-ip=192.168.200.200: (21.400978939s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-396614 ip
helpers_test.go:175: Cleaning up "static-ip-396614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-396614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-396614: (2.099068548s)
--- PASS: TestKicStaticIP (23.63s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (50.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-124639 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-124639 --driver=docker  --container-runtime=crio: (24.196457346s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-147845 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-147845 --driver=docker  --container-runtime=crio: (20.828834821s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-124639
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-147845
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-147845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-147845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-147845: (1.863977061s)
helpers_test.go:175: Cleaning up "first-124639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-124639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-124639: (2.242569906s)
--- PASS: TestMinikubeProfile (50.32s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-194996 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-194996 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.594731228s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.60s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-194996 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.25s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-214519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-214519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.252932242s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.25s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-214519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-194996 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-194996 --alsologtostderr -v=5: (1.60824691s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-214519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-214519
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-214519: (1.182172999s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-214519
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-214519: (6.155875942s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-214519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (72.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-515925 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 21:04:13.219537  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-515925 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.791830645s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.27s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-515925 -- rollout status deployment/busybox: (2.39515305s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-j2cgz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-nl8d8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-j2cgz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-nl8d8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-j2cgz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-nl8d8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-j2cgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-j2cgz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-nl8d8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-515925 -- exec busybox-7dff88458-nl8d8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (28.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-515925 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-515925 -v 3 --alsologtostderr: (28.231997975s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-515925 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp testdata/cp-test.txt multinode-515925:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3831423991/001/cp-test_multinode-515925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925:/home/docker/cp-test.txt multinode-515925-m02:/home/docker/cp-test_multinode-515925_multinode-515925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m02 "sudo cat /home/docker/cp-test_multinode-515925_multinode-515925-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925:/home/docker/cp-test.txt multinode-515925-m03:/home/docker/cp-test_multinode-515925_multinode-515925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m03 "sudo cat /home/docker/cp-test_multinode-515925_multinode-515925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp testdata/cp-test.txt multinode-515925-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3831423991/001/cp-test_multinode-515925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925-m02:/home/docker/cp-test.txt multinode-515925:/home/docker/cp-test_multinode-515925-m02_multinode-515925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925 "sudo cat /home/docker/cp-test_multinode-515925-m02_multinode-515925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925-m02:/home/docker/cp-test.txt multinode-515925-m03:/home/docker/cp-test_multinode-515925-m02_multinode-515925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m03 "sudo cat /home/docker/cp-test_multinode-515925-m02_multinode-515925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp testdata/cp-test.txt multinode-515925-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3831423991/001/cp-test_multinode-515925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925-m03:/home/docker/cp-test.txt multinode-515925:/home/docker/cp-test_multinode-515925-m03_multinode-515925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925 "sudo cat /home/docker/cp-test_multinode-515925-m03_multinode-515925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 cp multinode-515925-m03:/home/docker/cp-test.txt multinode-515925-m02:/home/docker/cp-test_multinode-515925-m03_multinode-515925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 ssh -n multinode-515925-m02 "sudo cat /home/docker/cp-test_multinode-515925-m03_multinode-515925-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.18s)

                                                
                                    
TestMultiNode/serial/StopNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-515925 node stop m03: (1.18666409s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-515925 status: exit status 7 (468.613827ms)

                                                
                                                
-- stdout --
	multinode-515925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-515925-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-515925-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr: exit status 7 (466.769502ms)

                                                
                                                
-- stdout --
	multinode-515925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-515925-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-515925-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:05:53.975084  983913 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:05:53.975204  983913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:05:53.975212  983913 out.go:358] Setting ErrFile to fd 2...
	I1205 21:05:53.975216  983913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:05:53.975391  983913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 21:05:53.975575  983913 out.go:352] Setting JSON to false
	I1205 21:05:53.975610  983913 mustload.go:65] Loading cluster: multinode-515925
	I1205 21:05:53.975701  983913 notify.go:220] Checking for updates...
	I1205 21:05:53.976006  983913 config.go:182] Loaded profile config "multinode-515925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:05:53.976025  983913 status.go:174] checking status of multinode-515925 ...
	I1205 21:05:53.976447  983913 cli_runner.go:164] Run: docker container inspect multinode-515925 --format={{.State.Status}}
	I1205 21:05:53.996025  983913 status.go:371] multinode-515925 host status = "Running" (err=<nil>)
	I1205 21:05:53.996070  983913 host.go:66] Checking if "multinode-515925" exists ...
	I1205 21:05:53.996446  983913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-515925
	I1205 21:05:54.013966  983913 host.go:66] Checking if "multinode-515925" exists ...
	I1205 21:05:54.014259  983913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 21:05:54.014311  983913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-515925
	I1205 21:05:54.032165  983913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/multinode-515925/id_rsa Username:docker}
	I1205 21:05:54.122226  983913 ssh_runner.go:195] Run: systemctl --version
	I1205 21:05:54.126279  983913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:05:54.137092  983913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 21:05:54.183421  983913 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:63 SystemTime:2024-12-05 21:05:54.174171404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 21:05:54.183966  983913 kubeconfig.go:125] found "multinode-515925" server: "https://192.168.67.2:8443"
	I1205 21:05:54.183993  983913 api_server.go:166] Checking apiserver status ...
	I1205 21:05:54.184028  983913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:05:54.194593  983913 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	I1205 21:05:54.203281  983913 api_server.go:182] apiserver freezer: "7:freezer:/docker/e1acb3208059a0634cbf70f5acc2e6b84d3b78b2e5bab49ce2b0a9381e2fb766/crio/crio-0c2dc0fc329727c03d906633db312bbb9335d54e86ed994ff4cac3ebe231874d"
	I1205 21:05:54.203341  983913 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e1acb3208059a0634cbf70f5acc2e6b84d3b78b2e5bab49ce2b0a9381e2fb766/crio/crio-0c2dc0fc329727c03d906633db312bbb9335d54e86ed994ff4cac3ebe231874d/freezer.state
	I1205 21:05:54.211177  983913 api_server.go:204] freezer state: "THAWED"
	I1205 21:05:54.211215  983913 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1205 21:05:54.215840  983913 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1205 21:05:54.215864  983913 status.go:463] multinode-515925 apiserver status = Running (err=<nil>)
	I1205 21:05:54.215874  983913 status.go:176] multinode-515925 status: &{Name:multinode-515925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 21:05:54.215889  983913 status.go:174] checking status of multinode-515925-m02 ...
	I1205 21:05:54.216160  983913 cli_runner.go:164] Run: docker container inspect multinode-515925-m02 --format={{.State.Status}}
	I1205 21:05:54.233059  983913 status.go:371] multinode-515925-m02 host status = "Running" (err=<nil>)
	I1205 21:05:54.233090  983913 host.go:66] Checking if "multinode-515925-m02" exists ...
	I1205 21:05:54.233339  983913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-515925-m02
	I1205 21:05:54.250399  983913 host.go:66] Checking if "multinode-515925-m02" exists ...
	I1205 21:05:54.250655  983913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 21:05:54.250699  983913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-515925-m02
	I1205 21:05:54.268642  983913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/20053-823623/.minikube/machines/multinode-515925-m02/id_rsa Username:docker}
	I1205 21:05:54.362197  983913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:05:54.372739  983913 status.go:176] multinode-515925-m02 status: &{Name:multinode-515925-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 21:05:54.372784  983913 status.go:174] checking status of multinode-515925-m03 ...
	I1205 21:05:54.373103  983913 cli_runner.go:164] Run: docker container inspect multinode-515925-m03 --format={{.State.Status}}
	I1205 21:05:54.390163  983913 status.go:371] multinode-515925-m03 host status = "Stopped" (err=<nil>)
	I1205 21:05:54.390192  983913 status.go:384] host is not running, skipping remaining checks
	I1205 21:05:54.390202  983913 status.go:176] multinode-515925-m03 status: &{Name:multinode-515925-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)
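The status output above shows how `minikube status` reports each node of a multi-node profile separately: the control plane and m02 remain Running while the stopped m03 is reported as Stopped without failing the command. A minimal by-hand sketch of the same check, using the profile name from the log (the stop itself happens earlier in the test and is not shown in this excerpt; `node stop m03` below is the assumed reproduction step):
	out/minikube-linux-amd64 -p multinode-515925 node stop m03
	out/minikube-linux-amd64 -p multinode-515925 status -v=7 --alsologtostderr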

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-515925 node start m03 -v=7 --alsologtostderr: (8.789629154s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.46s)
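Restarting the previously stopped worker is a single `node start` call followed by a status check and a kubectl sanity check; the commands below are the ones invoked above, unchanged:
	out/minikube-linux-amd64 -p multinode-515925 node start m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-515925 status -v=7 --alsologtostderr
	kubectl get nodes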

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (102.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-515925
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-515925
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-515925: (24.802489704s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-515925 --wait=true -v=8 --alsologtostderr
E1205 21:07:09.967345  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-515925 --wait=true -v=8 --alsologtostderr: (1m17.163735254s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-515925
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.07s)
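The restart test stops the whole profile, starts it again with --wait=true, and compares the `node list` output before and after to confirm that no node was dropped across the restart. The sequence, as invoked above:
	out/minikube-linux-amd64 node list -p multinode-515925
	out/minikube-linux-amd64 stop -p multinode-515925
	out/minikube-linux-amd64 start -p multinode-515925 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-515925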

                                                
                                    
TestMultiNode/serial/DeleteNode (5.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-515925 node delete m03: (4.736630903s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)
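Deleting a worker follows the same pattern: remove the node, check profile status, and confirm via kubectl that every remaining node still reports a Ready condition (the go-template invocation above prints just that condition per node). Condensed:
	out/minikube-linux-amd64 -p multinode-515925 node delete m03
	out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
	kubectl get nodes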

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-515925 stop: (23.579408333s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-515925 status: exit status 7 (87.087333ms)

                                                
                                                
-- stdout --
	multinode-515925
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-515925-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr: exit status 7 (86.435806ms)

                                                
                                                
-- stdout --
	multinode-515925
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-515925-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:08:14.942698  993640 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:08:14.942845  993640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:08:14.942857  993640 out.go:358] Setting ErrFile to fd 2...
	I1205 21:08:14.942863  993640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:08:14.943063  993640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 21:08:14.943241  993640 out.go:352] Setting JSON to false
	I1205 21:08:14.943284  993640 mustload.go:65] Loading cluster: multinode-515925
	I1205 21:08:14.943373  993640 notify.go:220] Checking for updates...
	I1205 21:08:14.943762  993640 config.go:182] Loaded profile config "multinode-515925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:08:14.943789  993640 status.go:174] checking status of multinode-515925 ...
	I1205 21:08:14.944265  993640 cli_runner.go:164] Run: docker container inspect multinode-515925 --format={{.State.Status}}
	I1205 21:08:14.961707  993640 status.go:371] multinode-515925 host status = "Stopped" (err=<nil>)
	I1205 21:08:14.961731  993640 status.go:384] host is not running, skipping remaining checks
	I1205 21:08:14.961742  993640 status.go:176] multinode-515925 status: &{Name:multinode-515925 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 21:08:14.961777  993640 status.go:174] checking status of multinode-515925-m02 ...
	I1205 21:08:14.962017  993640 cli_runner.go:164] Run: docker container inspect multinode-515925-m02 --format={{.State.Status}}
	I1205 21:08:14.978684  993640 status.go:371] multinode-515925-m02 host status = "Stopped" (err=<nil>)
	I1205 21:08:14.978722  993640 status.go:384] host is not running, skipping remaining checks
	I1205 21:08:14.978734  993640 status.go:176] multinode-515925-m02 status: &{Name:multinode-515925-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.75s)
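Note that `minikube status` deliberately exits non-zero (exit status 7) once the hosts are stopped, so the test asserts on the stdout block rather than the exit code. A by-hand equivalent, using the profile name from the log (the trailing `|| true` only keeps an interactive shell from treating the expected status code as an error and is not part of the test):
	out/minikube-linux-amd64 -p multinode-515925 stop
	out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr || true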

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-515925 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-515925 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (48.901566609s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-515925 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.49s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-515925
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-515925-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-515925-m02 --driver=docker  --container-runtime=crio: exit status 14 (74.898711ms)

                                                
                                                
-- stdout --
	* [multinode-515925-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-515925-m02' is duplicated with machine name 'multinode-515925-m02' in profile 'multinode-515925'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-515925-m03 --driver=docker  --container-runtime=crio
E1205 21:09:13.221116  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-515925-m03 --driver=docker  --container-runtime=crio: (23.200647231s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-515925
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-515925: exit status 80 (270.201069ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-515925 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-515925-m03 already exists in multinode-515925-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-515925-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-515925-m03: (1.888730093s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.49s)
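The name-conflict check exercises two distinct rules visible in the stderr above: a new profile may not reuse a machine name that already belongs to an existing profile (exit status 14, MK_USAGE), and `node add` refuses to add a node whose generated name collides with an existing profile (exit status 80, GUEST_NODE_ADD). Reproduced with the exact names from this run:
	out/minikube-linux-amd64 start -p multinode-515925-m02 --driver=docker --container-runtime=crio    # rejected: duplicates a machine name in profile multinode-515925
	out/minikube-linux-amd64 start -p multinode-515925-m03 --driver=docker --container-runtime=crio    # allowed: independent profile
	out/minikube-linux-amd64 node add -p multinode-515925                                              # rejected while the -m03 profile exists
	out/minikube-linux-amd64 delete -p multinode-515925-m03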

                                                
                                    
TestPreload (109.49s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 21:10:13.038868  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:10:36.283908  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.893145229s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009472 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-009472 image pull gcr.io/k8s-minikube/busybox: (1.165399449s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-009472
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-009472: (5.714488299s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009472 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009472 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.160311338s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009472 image list
helpers_test.go:175: Cleaning up "test-preload-009472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-009472
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-009472: (2.326965885s)
--- PASS: TestPreload (109.49s)
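The preload flow spelled out: start an older Kubernetes version with preloaded tarballs disabled, pull an extra image, stop, restart with the current defaults (preload enabled), and confirm via `image list` that the previously pulled image survived the restart. The commands, as run above:
	out/minikube-linux-amd64 start -p test-preload-009472 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-009472 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-009472
	out/minikube-linux-amd64 start -p test-preload-009472 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-009472 image list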

                                                
                                    
TestScheduledStopUnix (98.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-976688 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-976688 --memory=2048 --driver=docker  --container-runtime=crio: (21.772809957s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976688 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-976688 -n scheduled-stop-976688
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976688 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1205 21:11:45.614970  830381 retry.go:31] will retry after 125.578µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.616184  830381 retry.go:31] will retry after 122.407µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.617339  830381 retry.go:31] will retry after 145.472µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.618499  830381 retry.go:31] will retry after 259.611µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.619593  830381 retry.go:31] will retry after 550.042µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.620715  830381 retry.go:31] will retry after 501.902µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.621863  830381 retry.go:31] will retry after 700.832µs: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.622992  830381 retry.go:31] will retry after 1.47499ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.625203  830381 retry.go:31] will retry after 3.280861ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.629438  830381 retry.go:31] will retry after 3.959365ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.633691  830381 retry.go:31] will retry after 7.578325ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.641938  830381 retry.go:31] will retry after 9.845116ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.652180  830381 retry.go:31] will retry after 9.023527ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.661459  830381 retry.go:31] will retry after 26.401346ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
I1205 21:11:45.688720  830381 retry.go:31] will retry after 39.956258ms: open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/scheduled-stop-976688/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976688 --cancel-scheduled
E1205 21:12:09.969574  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-976688 -n scheduled-stop-976688
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-976688
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976688 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-976688
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-976688: exit status 7 (71.484216ms)

                                                
                                                
-- stdout --
	scheduled-stop-976688
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-976688 -n scheduled-stop-976688
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-976688 -n scheduled-stop-976688: exit status 7 (72.560594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-976688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-976688
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-976688: (5.032761325s)
--- PASS: TestScheduledStopUnix (98.18s)
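Scheduled stop is driven entirely by the `--schedule` and `--cancel-scheduled` flags on `minikube stop`; once a schedule fires, `minikube status` returns exit status 7 (host Stopped), which the test treats as the expected terminal state rather than a failure. In short, per the invocations above:
	out/minikube-linux-amd64 stop -p scheduled-stop-976688 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-976688 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-976688 --schedule 15s
	out/minikube-linux-amd64 status -p scheduled-stop-976688    # exit status 7 once the scheduled stop has fired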

                                                
                                    
TestInsufficientStorage (13.13s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-680763 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-680763 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.702114063s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d055e822-e842-4b13-8474-28873a1bf032","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-680763] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20d4f657-5380-4809-bf58-a65c635967c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}
	{"specversion":"1.0","id":"aa37eff4-4497-4198-afae-ebfd7aebad7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c387a7a7-3e39-4299-ad73-c16383a74aa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig"}}
	{"specversion":"1.0","id":"43a56988-de95-4a7c-b3e2-8bf539cad52d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube"}}
	{"specversion":"1.0","id":"ab4d6f22-9067-4879-94ba-8c860d9ccb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fbb33525-c9c1-4870-a6ab-4a498085c59c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2436a4a4-079d-4e3a-8e77-40ed9f5fb67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"105726af-cd6c-4398-b71b-71b7be5e0ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8eb8c4e0-43c4-4a83-b9ae-e0ea1f78b624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d67ba4e-fe98-42cd-aa25-72ad588db7e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f31d7623-df8e-4a31-ae3a-0ef3aa57b9f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-680763\" primary control-plane node in \"insufficient-storage-680763\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"083439f4-c6e8-4402-931b-ac2b671778e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0ea8d08-6247-429a-bb7a-0fcbc30886db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"999d149f-3790-4c61-837c-c8ca9d0fdbce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-680763 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-680763 --output=json --layout=cluster: exit status 7 (274.136971ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-680763","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-680763","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:13:12.562136 1016170 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-680763" does not appear in /home/jenkins/minikube-integration/20053-823623/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-680763 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-680763 --output=json --layout=cluster: exit status 7 (264.623986ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-680763","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-680763","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:13:12.827603 1016269 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-680763" does not appear in /home/jenkins/minikube-integration/20053-823623/kubeconfig
	E1205 21:13:12.838305 1016269 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/insufficient-storage-680763/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-680763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-680763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-680763: (1.890399588s)
--- PASS: TestInsufficientStorage (13.13s)
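The storage check relies on the harness's test-only knobs: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON events above make minikube treat /var as full, so `start` exits with code 26 (RSRC_DOCKER_STORAGE) and `status --output=json --layout=cluster` reports StatusCode 507. Assuming those knobs are plain environment variables (they are printed alongside the other MINIKUBE_* settings), a sketch:
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-amd64 start -p insufficient-storage-680763 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 status -p insufficient-storage-680763 --output=json --layout=cluster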

                                                
                                    
TestRunningBinaryUpgrade (64.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3052326986 start -p running-upgrade-724076 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3052326986 start -p running-upgrade-724076 --memory=2200 --vm-driver=docker  --container-runtime=crio: (24.96992862s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-724076 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 21:17:09.966853  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-724076 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.672933266s)
helpers_test.go:175: Cleaning up "running-upgrade-724076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-724076
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-724076: (7.988235333s)
--- PASS: TestRunningBinaryUpgrade (64.13s)
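The running-binary upgrade starts a profile with a previously released minikube (v1.26.0, downloaded by the harness to the temp path shown in this run) and then runs `start` on the same, still-running profile with the freshly built binary; the second start must reconcile the existing cluster rather than recreate it:
	/tmp/minikube-v1.26.0.3052326986 start -p running-upgrade-724076 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-724076 --memory=2200 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-724076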

                                                
                                    
TestKubernetesUpgrade (354.87s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.409980242s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-684343
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-684343: (2.468305405s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-684343 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-684343 status --format={{.Host}}: exit status 7 (113.505782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.183733198s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-684343 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (77.415551ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-684343] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-684343
	    minikube start -p kubernetes-upgrade-684343 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6843432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-684343 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.37077746s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-684343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-684343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-684343: (2.174807969s)
--- PASS: TestKubernetesUpgrade (354.87s)
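The Kubernetes upgrade path above is: install v1.20.0, stop, start again with --kubernetes-version=v1.31.2 (the upgrade), verify that asking for v1.20.0 again is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), and confirm a restart at the new version still works. Condensed from the invocations above:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-684343
	out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio    # exit status 106: downgrade refused
	out/minikube-linux-amd64 start -p kubernetes-upgrade-684343 --memory=2200 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=crio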

                                                
                                    
TestMissingContainerUpgrade (143.44s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1263822092 start -p missing-upgrade-738429 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1263822092 start -p missing-upgrade-738429 --memory=2200 --driver=docker  --container-runtime=crio: (1m7.092260803s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-738429
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-738429: (17.389355343s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-738429
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-738429 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-738429 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.705523341s)
helpers_test.go:175: Cleaning up "missing-upgrade-738429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-738429
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-738429: (3.801460748s)
--- PASS: TestMissingContainerUpgrade (143.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (83.489747ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-294941] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
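The MK_USAGE failure above documents an intentional flag conflict: `--no-kubernetes` cannot be combined with `--kubernetes-version`. Dropping the version flag makes the same start succeed, as the later --no-kubernetes starts in this group do:
	out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio    # exit status 14 (MK_USAGE)
	out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --driver=docker --container-runtime=crio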

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294941 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294941 --driver=docker  --container-runtime=crio: (34.804685887s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-294941 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (99.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3052505015 start -p stopped-upgrade-298866 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3052505015 start -p stopped-upgrade-298866 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.656754438s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3052505015 -p stopped-upgrade-298866 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3052505015 -p stopped-upgrade-298866 stop: (2.27606902s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-298866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-298866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.995981746s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.93s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (13.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --driver=docker  --container-runtime=crio: (11.039799946s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-294941 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-294941 status -o json: exit status 2 (370.24037ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-294941","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-294941
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-294941: (2.107917781s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.52s)

                                                
                                    
TestNoKubernetes/serial/Start (8.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294941 --no-kubernetes --driver=docker  --container-runtime=crio: (8.177589095s)
--- PASS: TestNoKubernetes/serial/Start (8.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-294941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-294941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (377.275921ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
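The "kubelet not running" check is a plain systemd probe over SSH: exit status 1 from `minikube ssh` here means the remote `systemctl is-active` returned non-zero, i.e. the kubelet unit is inactive, which is exactly what a --no-kubernetes profile should report:
	out/minikube-linux-amd64 ssh -p NoKubernetes-294941 "sudo systemctl is-active --quiet service kubelet"    # non-zero exit expected for a --no-kubernetes profile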

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1205 21:14:13.219283  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.246957458s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.06s)

                                                
                                    
TestPause/serial/Start (48.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-128782 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-128782 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.888018184s)
--- PASS: TestPause/serial/Start (48.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-294941
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-294941: (2.883575576s)
--- PASS: TestNoKubernetes/serial/Stop (2.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294941 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294941 --driver=docker  --container-runtime=crio: (6.620597055s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-294941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-294941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.085114ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-298866
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-128782 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-128782 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.328091494s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.34s)

                                                
                                    
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-128782 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-128782 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-128782 --output=json --layout=cluster: exit status 2 (369.294506ms)

                                                
                                                
-- stdout --
	{"Name":"pause-128782","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-128782","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
TestPause/serial/Unpause (1.06s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-128782 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-128782 --alsologtostderr -v=5: (1.057550528s)
--- PASS: TestPause/serial/Unpause (1.06s)

                                                
                                    
TestPause/serial/PauseAgain (0.98s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-128782 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.98s)

                                                
                                    
TestPause/serial/DeletePaused (2.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-128782 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-128782 --alsologtostderr -v=5: (2.875203954s)
--- PASS: TestPause/serial/DeletePaused (2.88s)
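The pause group exercises the full lifecycle on one profile: pause, verify the 418 "Paused" status (note that `status` exits 2 while the cluster is paused), unpause, pause again, and finally delete the paused profile. As run above:
	out/minikube-linux-amd64 pause -p pause-128782
	out/minikube-linux-amd64 status -p pause-128782 --output=json --layout=cluster    # exit status 2; StatusCode 418 (Paused)
	out/minikube-linux-amd64 unpause -p pause-128782
	out/minikube-linux-amd64 pause -p pause-128782
	out/minikube-linux-amd64 delete -p pause-128782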

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.5s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.439954122s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-128782
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-128782: exit status 1 (17.058678ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-128782: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.50s)
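
The deletion check above treats a failing docker volume inspect pause-128782 (exit status 1, "no such volume") as proof that the profile's volume is gone. A minimal stand-alone sketch of the same idea, assuming the same profile name; it is not the test source:

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect` fails for the named
// volume, which is how the log above confirms the pause-128782 volume was
// removed after `minikube delete`.
func volumeGone(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil // non-zero exit means the volume no longer exists
}

func main() {
	fmt.Println("volume removed:", volumeGone("pause-128782"))
}
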

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-826012 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-826012 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (157.21045ms)

                                                
                                                
-- stdout --
	* [false-826012] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:15:41.771642 1056673 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:15:41.771887 1056673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:15:41.771895 1056673 out.go:358] Setting ErrFile to fd 2...
	I1205 21:15:41.771899 1056673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:15:41.772111 1056673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-823623/.minikube/bin
	I1205 21:15:41.772754 1056673 out.go:352] Setting JSON to false
	I1205 21:15:41.773929 1056673 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14291,"bootTime":1733419051,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:15:41.774066 1056673 start.go:139] virtualization: kvm guest
	I1205 21:15:41.776503 1056673 out.go:177] * [false-826012] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:15:41.778163 1056673 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:15:41.778219 1056673 notify.go:220] Checking for updates...
	I1205 21:15:41.780732 1056673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:15:41.782140 1056673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-823623/kubeconfig
	I1205 21:15:41.783456 1056673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-823623/.minikube
	I1205 21:15:41.784841 1056673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:15:41.786388 1056673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:15:41.788397 1056673 config.go:182] Loaded profile config "cert-expiration-847743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:15:41.788523 1056673 config.go:182] Loaded profile config "kubernetes-upgrade-684343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:15:41.788685 1056673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:15:41.813273 1056673 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 21:15:41.813375 1056673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 21:15:41.865787 1056673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:70 SystemTime:2024-12-05 21:15:41.855927891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 21:15:41.865896 1056673 docker.go:318] overlay module found
	I1205 21:15:41.868455 1056673 out.go:177] * Using the docker driver based on user configuration
	I1205 21:15:41.870290 1056673 start.go:297] selected driver: docker
	I1205 21:15:41.870305 1056673 start.go:901] validating driver "docker" against <nil>
	I1205 21:15:41.870327 1056673 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:15:41.872863 1056673 out.go:201] 
	W1205 21:15:41.874458 1056673 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 21:15:41.875716 1056673 out.go:201] 

                                                
                                                
** /stderr **
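
The exit status 14 above is the expected outcome: MK_USAGE is returned because the crio container runtime requires a CNI plugin, so --cni=false is rejected before any cluster is created and the group is still recorded as a pass. A rough sketch of asserting that behaviour (illustrative only, not net_test.go itself):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --cni=false combined with --container-runtime=crio should be rejected
	// with usage error code 14, as shown in the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-826012",
		"--memory=2048", "--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()

	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
		fmt.Println("got expected MK_USAGE exit (14)")
		return
	}
	fmt.Println("unexpected result:", err)
}
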
net_test.go:88: 
----------------------- debugLogs start: false-826012 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-826012" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 21:15:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-684343
contexts:
- context:
    cluster: kubernetes-upgrade-684343
    user: kubernetes-upgrade-684343
  name: kubernetes-upgrade-684343
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-684343
  user:
    client-certificate: /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/kubernetes-upgrade-684343/client.crt
    client-key: /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/kubernetes-upgrade-684343/client.key
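
The repeated "context was not found" and "does not exist" messages in these debug logs follow directly from the kubeconfig above: current-context is empty and the only defined context is kubernetes-upgrade-684343, so nothing named false-826012 can be resolved. A minimal sketch of inspecting that with client-go's clientcmd loader (assumes k8s.io/client-go is available; illustrative only, not part of the test suite):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig that the debug logs dump above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/20053-823623/kubeconfig")
	if err != nil {
		fmt.Println("load error:", err)
		return
	}

	// current-context is "" in the dump and only kubernetes-upgrade-684343 is
	// defined, which is why kubectl reports the false-826012 context as missing.
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
}
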

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-826012

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-826012"

                                                
                                                
----------------------- debugLogs end: false-826012 [took: 4.258947948s] --------------------------------
helpers_test.go:175: Cleaning up "false-826012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-826012
--- PASS: TestNetworkPlugins/group/false (4.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (131.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-540296 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-540296 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m11.474021472s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (55.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (55.535569029s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328335 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [924ebd1d-ce68-4533-8cc7-480352e170c7] Pending
helpers_test.go:344: "busybox" [924ebd1d-ce68-4533-8cc7-480352e170c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [924ebd1d-ce68-4533-8cc7-480352e170c7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00436227s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328335 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-328335 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-328335 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-328335 --alsologtostderr -v=3: (11.857420861s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-540296 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f9ef3cb-1858-4b65-af7e-3ad99ab8d64a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f9ef3cb-1858-4b65-af7e-3ad99ab8d64a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003544883s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-540296 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328335 -n no-preload-328335
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328335 -n no-preload-328335: exit status 7 (89.86543ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-328335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
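
Throughout these EnableAddonAfterStop steps, minikube status --format={{.Host}} exits with code 7 once the profile is stopped; the tests explicitly note this "may be ok" before enabling the dashboard addon against the stopped profile. A small sketch of tolerating that exit code in the same way (illustrative; the exit code and the "Stopped" string come from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After `minikube stop`, the status command exits 7 and prints "Stopped";
	// per the log above, that combination is treated as acceptable.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-328335", "-n", "no-preload-328335").Output()

	host := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 && host == "Stopped" {
		fmt.Println("host stopped; addons can still be enabled for the profile")
		return
	}
	fmt.Println("host state:", host, "err:", err)
}
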

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (299.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m59.644198904s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328335 -n no-preload-328335
E1205 21:23:37.131198  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:37.137651  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:37.149313  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:37.171182  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.98s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-540296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-540296 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-540296 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-540296 --alsologtostderr -v=3: (13.02825946s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-540296 -n old-k8s-version-540296
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-540296 -n old-k8s-version-540296: exit status 7 (80.614954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-540296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (145.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-540296 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-540296 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m24.811451028s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-540296 -n old-k8s-version-540296
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (145.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (41.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-564053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-564053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (41.886534036s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-564053 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [418a0599-5f05-4106-a96a-be1985f79507] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [418a0599-5f05-4106-a96a-be1985f79507] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004088342s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-564053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-564053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-564053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (13.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-564053 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-564053 --alsologtostderr -v=3: (13.804653195s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-278063 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-278063 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (51.102377843s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-564053 -n embed-certs-564053
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-564053 -n embed-certs-564053: exit status 7 (98.524036ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-564053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (263.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-564053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-564053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.857135325s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-564053 -n embed-certs-564053
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-278063 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [50836847-691a-4ae3-8ee6-db836e1dfac4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [50836847-691a-4ae3-8ee6-db836e1dfac4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004741275s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-278063 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-278063 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-278063 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-278063 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-278063 --alsologtostderr -v=3: (11.932646957s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mrs8v" [228ff59c-4e34-4b9f-b26c-0b931383d079] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004087755s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mrs8v" [228ff59c-4e34-4b9f-b26c-0b931383d079] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003764406s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-540296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063: exit status 7 (72.653425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-278063 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (275.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-278063 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-278063 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m34.75043906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (275.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-540296 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-540296 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-540296 -n old-k8s-version-540296
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-540296 -n old-k8s-version-540296: exit status 2 (297.940866ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-540296 -n old-k8s-version-540296
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-540296 -n old-k8s-version-540296: exit status 2 (297.327201ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-540296 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-540296 -n old-k8s-version-540296
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-540296 -n old-k8s-version-540296
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)
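
The pause check above can be reproduced by hand; a minimal sketch using the profile from the log (the non-zero status exits are expected while components are paused, per the "(may be ok)" notes):

  out/minikube-linux-amd64 pause -p old-k8s-version-540296
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-540296   # expect "Paused", exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-540296     # expect "Stopped", exit status 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-540296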

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (30.81s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-143133 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 21:22:09.967469  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-143133 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (30.813297801s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.81s)
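
A sketch of the CNI-mode first start exercised here, with flags abridged from the command above: --wait limits readiness checks to the apiserver, system pods and default service account, and the pod-network CIDR is handed to kubeadm via --extra-config:

  out/minikube-linux-amd64 start -p newest-cni-143133 --memory=2200 --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio --kubernetes-version=v1.31.2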

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-143133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-143133 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-143133 --alsologtostderr -v=3: (2.092854136s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143133 -n newest-cni-143133
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143133 -n newest-cni-143133: exit status 7 (81.478921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-143133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
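
This step verifies that an addon can be flagged on while the cluster is stopped; "status" reports Stopped with a non-zero exit (7 here), which the test treats as acceptable. A rough by-hand equivalent with the profile from the log:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143133; echo "exit=$?"
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-143133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4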

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.12s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-143133 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-143133 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (12.799705766s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143133 -n newest-cni-143133
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-143133 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
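
The image audit simply lists what is loaded in the profile and flags anything outside the stock minikube/Kubernetes set. A sketch of the same listing; piping the JSON through a tool such as jq is an assumption here, and the exact JSON field names are not shown in this log:

  out/minikube-linux-amd64 -p newest-cni-143133 image list                  # plain list of repo:tag names
  out/minikube-linux-amd64 -p newest-cni-143133 image list --format=json    # machine-readable form used above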

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-143133 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-143133 -n newest-cni-143133
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-143133 -n newest-cni-143133: exit status 2 (294.019442ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-143133 -n newest-cni-143133
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-143133 -n newest-cni-143133: exit status 2 (292.281843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-143133 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-143133 -n newest-cni-143133
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-143133 -n newest-cni-143133
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.626436663s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.63s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-826012 "pgrep -a kubelet"
I1205 21:23:17.471909  830381 config.go:182] Loaded profile config "auto-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c7vq7" [a12e6397-1570-4bbd-befb-c028c8006653] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c7vq7" [a12e6397-1570-4bbd-befb-c028c8006653] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004448096s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)
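
The NetCatPod step deploys a small dnsutils/netcat pod from the test's testdata directory and waits for it to become Ready. By hand, the wait can be approximated with kubectl wait (an assumption; the harness polls the pod list instead):

  kubectl --context auto-826012 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-826012 wait --for=condition=ready pod -l app=netcat --timeout=300s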

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
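
The DNS, Localhost and HairPin probes above check, in order: in-cluster name resolution from the netcat pod, loopback connectivity inside the pod, and hairpin traffic, i.e. the pod reaching itself through its own service name. The commands, copied from the log:

  kubectl --context auto-826012 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"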

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jsgwg" [765c559a-19d5-42d8-99e8-6133964bed60] Running
E1205 21:23:37.212955  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:37.294537  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:37.456236  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:37.778249  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:38.420027  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:39.702326  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:23:42.264651  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003659644s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jsgwg" [765c559a-19d5-42d8-99e8-6133964bed60] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004792271s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-328335 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (47.46s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1205 21:23:47.385948  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.462434747s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328335 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.97s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-328335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328335 -n no-preload-328335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328335 -n no-preload-328335: exit status 2 (314.695533ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-328335 -n no-preload-328335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-328335 -n no-preload-328335: exit status 2 (310.735494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-328335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328335 -n no-preload-328335
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-328335 -n no-preload-328335
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.97s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1205 21:23:57.628061  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:24:13.219064  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:24:18.110520  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.167577539s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cb8jj" [27bcd617-1691-4a47-a295-f912908d4e8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004421177s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
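
The ControllerPod check waits for the CNI's own pod to be Running; a by-hand equivalent using the namespace and selector from the log:

  kubectl --context kindnet-826012 -n kube-system get pods -l app=kindnet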

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-826012 "pgrep -a kubelet"
I1205 21:24:38.972618  830381 config.go:182] Loaded profile config "kindnet-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9t8lx" [d8729b35-1145-4b51-b28b-894d787fa682] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9t8lx" [d8729b35-1145-4b51-b28b-894d787fa682] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004481696s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dr4mr" [81bbab1f-9a87-4ad3-a871-847843623905] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004198611s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-826012 "pgrep -a kubelet"
I1205 21:24:49.904584  830381 config.go:182] Loaded profile config "flannel-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jrvfq" [8e247c27-5fa0-4e2f-a7ee-ea7382cf290f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jrvfq" [8e247c27-5fa0-4e2f-a7ee-ea7382cf290f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003803322s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2qzpb" [8d3845f7-f16b-40bd-91ef-4ccc35d30ef0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004715816s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2qzpb" [8d3845f7-f16b-40bd-91ef-4ccc35d30ef0] Running
E1205 21:24:59.072144  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004482072s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-564053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
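
This step confirms the dashboard addon survived the stop/restart cycle; roughly equivalent by hand, with the namespace, selector and deployment name taken from the log:

  kubectl --context embed-certs-564053 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context embed-certs-564053 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper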

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-564053 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-564053 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-564053 -n embed-certs-564053
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-564053 -n embed-certs-564053: exit status 2 (329.933065ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-564053 -n embed-certs-564053
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-564053 -n embed-certs-564053: exit status 2 (315.374024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-564053 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-564053 -n embed-certs-564053
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-564053 -n embed-certs-564053
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (36.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (36.487580461s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.49s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (41.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.643527886s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.64s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.3217966s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-826012 "pgrep -a kubelet"
I1205 21:25:45.476136  830381 config.go:182] Loaded profile config "enable-default-cni-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mmnqc" [2d37acae-0442-434b-a0c8-a076b8ccc77d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mmnqc" [2d37acae-0442-434b-a0c8-a076b8ccc77d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003647851s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-826012 "pgrep -a kubelet"
I1205 21:25:51.993712  830381 config.go:182] Loaded profile config "bridge-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ftl95" [c8de0989-e7dc-47a6-ace6-fe15c1f033ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ftl95" [c8de0989-e7dc-47a6-ace6-fe15c1f033ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004595329s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rbs54" [1e3fb224-17db-4080-995f-6b4def0bea85] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.116736394s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rbs54" [1e3fb224-17db-4080-995f-6b4def0bea85] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005165792s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-278063 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (52.06s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-826012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.060348832s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.06s)
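
Unlike the named CNIs above (kindnet, flannel, calico, bridge), this profile starts with a custom CNI manifest passed by path. A sketch, abridged from the command above; the manifest lives in the test's testdata directory:

  out/minikube-linux-amd64 start -p custom-flannel-826012 --memory=3072 --cni=testdata/kube-flannel.yaml --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio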

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-278063 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-278063 --alsologtostderr -v=1
E1205 21:26:20.994322  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/old-k8s-version-540296/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-278063 --alsologtostderr -v=1: (1.106079254s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063: exit status 2 (300.395172ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063: exit status 2 (295.313978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-278063 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-278063 -n default-k8s-diff-port-278063
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kmg7q" [b909c846-c991-44be-9fcd-301be404a3b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005427346s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-826012 "pgrep -a kubelet"
I1205 21:26:27.871473  830381 config.go:182] Loaded profile config "calico-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tdjwb" [5b9eea21-1581-458d-8329-e9d663ad267f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tdjwb" [5b9eea21-1581-458d-8329-e9d663ad267f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004416167s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-826012 "pgrep -a kubelet"
I1205 21:27:09.187470  830381 config.go:182] Loaded profile config "custom-flannel-826012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-826012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mn6w8" [d65b1efb-e797-42fe-b0ce-c2d002d3aa81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 21:27:09.967571  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/addons-583828/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mn6w8" [d65b1efb-e797-42fe-b0ce-c2d002d3aa81] Running
E1205 21:27:16.285576  830381 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/functional-035676/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004030272s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-826012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-826012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

Test skip (26/329)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestAddons/serial/Volcano (0.43s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-583828 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.43s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-510688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-510688
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.55s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-826012 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-826012" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 21:15:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-684343
contexts:
- context:
    cluster: kubernetes-upgrade-684343
    user: kubernetes-upgrade-684343
  name: kubernetes-upgrade-684343
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-684343
  user:
    client-certificate: /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/kubernetes-upgrade-684343/client.crt
    client-key: /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/kubernetes-upgrade-684343/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-826012

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-826012"

                                                
                                                
----------------------- debugLogs end: kubenet-826012 [took: 3.395911592s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-826012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-826012
--- SKIP: TestNetworkPlugins/group/kubenet (3.55s)

TestNetworkPlugins/group/cilium (3.55s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-826012 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-826012" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20053-823623/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 21:15:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-684343
contexts:
- context:
    cluster: kubernetes-upgrade-684343
    user: kubernetes-upgrade-684343
  name: kubernetes-upgrade-684343
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-684343
  user:
    client-certificate: /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/kubernetes-upgrade-684343/client.crt
    client-key: /home/jenkins/minikube-integration/20053-823623/.minikube/profiles/kubernetes-upgrade-684343/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-826012

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-826012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826012"

                                                
                                                
----------------------- debugLogs end: cilium-826012 [took: 3.395969349s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-826012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-826012
--- SKIP: TestNetworkPlugins/group/cilium (3.55s)