Test Report: Docker_Linux_crio 20720

b7440dc9e9eb90138d871b2ff610c46584e06ed3:2025-05-10:39516

Test fail (13/330)

TestAddons/parallel/Ingress (491.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-088134 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-088134 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-088134 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [670a8744-ab16-44a6-a1c9-0a18c96cf593] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-088134 -n addons-088134
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-05-10 17:05:38.655726115 +0000 UTC m=+678.337513242
addons_test.go:250: (dbg) Run:  kubectl --context addons-088134 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-088134 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-088134/192.168.49.2
Start Time:       Sat, 10 May 2025 16:57:38 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
  IP:  10.244.0.28
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vv759 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-vv759:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  8m                    default-scheduler  Successfully assigned default/nginx to addons-088134
Warning  Failed     7m28s                 kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m40s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    107s (x5 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     77s (x5 over 7m28s)   kubelet            Error: ErrImagePull
Warning  Failed     77s (x3 over 6m12s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    11s (x16 over 7m27s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     11s (x16 over 7m27s)  kubelet            Error: ImagePullBackOff
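
The toomanyrequests events above are Docker Hub's unauthenticated pull-rate limit, not a cluster fault. A quick way to confirm, as a sketch that assumes crictl is available inside the kicbase node (it normally is for the crio runtime), is to retry the pull by hand and see whether the registry is still throttling:

	# re-run the pull the kubelet keeps failing on; a throttled registry
	# returns the same toomanyrequests error seen in the events above
	minikube -p addons-088134 ssh -- sudo crictl pull docker.io/nginx:alpine
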
addons_test.go:250: (dbg) Run:  kubectl --context addons-088134 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-088134 logs nginx -n default: exit status 1 (65.911255ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-088134 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
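
Since the pod never starts only because the image pull is throttled, two common mitigations apply. Both are sketches: the regcred secret name and the Docker Hub credentials below are placeholders, and option 2 relies on the pod using the default service account (which the describe output above confirms):

	# Option 1: side-load the image into the cluster so the kubelet never pulls it
	minikube -p addons-088134 image load docker.io/nginx:alpine

	# Option 2: authenticate pulls, which raises the Docker Hub rate limit
	kubectl --context addons-088134 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-088134 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
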
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-088134
helpers_test.go:235: (dbg) docker inspect addons-088134:

-- stdout --
	[
	    {
	        "Id": "bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3",
	        "Created": "2025-05-10T16:54:55.051517583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731712,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T16:54:55.084728242Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/hosts",
	        "LogPath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3-json.log",
	        "Name": "/addons-088134",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-088134:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-088134",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3",
	                "LowerDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-088134",
	                "Source": "/var/lib/docker/volumes/addons-088134/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-088134",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-088134",
	                "name.minikube.sigs.k8s.io": "addons-088134",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccf4c159a1e3d7c14c6b2af2b0e83245ce1734e599b4a1db79a0723d9527d987",
	            "SandboxKey": "/var/run/docker/netns/ccf4c159a1e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-088134": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:95:eb:e2:a0:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1451ebaebe192172eaaa1efea72c06a6f6dd3a306dcb7d4f5031305b008d7ead",
	                    "EndpointID": "209ae50c65ab2696f593be13dc9ae5cbe9e907be6254d2a0be92544909791911",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-088134",
	                        "bde85e095a68"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
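
When only a few fields matter, docker inspect can be narrowed with a Go template instead of dumping the whole document; for example, using field paths that appear in the JSON above:

	# container state, and the host port mapped to the API server port
	docker inspect -f '{{.State.Status}}' addons-088134
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-088134
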
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-088134 -n addons-088134
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 logs -n 25: (1.160050239s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-029562                                                                     | download-only-029562   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| delete  | -p download-only-184104                                                                     | download-only-184104   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-238188 | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | download-docker-238188                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-238188                                                                   | download-docker-238188 | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-854589   | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | binary-mirror-854589                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37525                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-854589                                                                     | binary-mirror-854589   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | addons-088134                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | addons-088134                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-088134 --wait=true                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | -p addons-088134                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-088134 ip                                                                            | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-088134 ssh cat                                                                       | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | /opt/local-path-provisioner/pvc-d21bcf7d-7863-46d1-95c2-f7795a677260_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 17:03 UTC | 10 May 25 17:03 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 17:03 UTC | 10 May 25 17:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 16:54:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 16:54:33.602414  731104 out.go:345] Setting OutFile to fd 1 ...
	I0510 16:54:33.602878  731104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:33.602892  731104 out.go:358] Setting ErrFile to fd 2...
	I0510 16:54:33.602899  731104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:33.603213  731104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 16:54:33.603888  731104 out.go:352] Setting JSON to false
	I0510 16:54:33.604776  731104 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9421,"bootTime":1746886653,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 16:54:33.604884  731104 start.go:140] virtualization: kvm guest
	I0510 16:54:33.607067  731104 out.go:177] * [addons-088134] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 16:54:33.608426  731104 notify.go:220] Checking for updates...
	I0510 16:54:33.608457  731104 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 16:54:33.609549  731104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 16:54:33.610937  731104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 16:54:33.612286  731104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 16:54:33.613635  731104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 16:54:33.615012  731104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 16:54:33.616496  731104 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 16:54:33.639029  731104 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 16:54:33.639115  731104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:33.687784  731104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:33.678668893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:33.687895  731104 docker.go:318] overlay module found
	I0510 16:54:33.689693  731104 out.go:177] * Using the docker driver based on user configuration
	I0510 16:54:33.690995  731104 start.go:304] selected driver: docker
	I0510 16:54:33.691011  731104 start.go:908] validating driver "docker" against <nil>
	I0510 16:54:33.691026  731104 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 16:54:33.692047  731104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:33.740934  731104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:33.732159464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:33.741185  731104 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 16:54:33.741458  731104 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 16:54:33.743486  731104 out.go:177] * Using Docker driver with root privileges
	I0510 16:54:33.744623  731104 cni.go:84] Creating CNI manager for ""
	I0510 16:54:33.744703  731104 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:54:33.744718  731104 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0510 16:54:33.744826  731104 start.go:347] cluster config:
	{Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 16:54:33.747109  731104 out.go:177] * Starting "addons-088134" primary control-plane node in "addons-088134" cluster
	I0510 16:54:33.748302  731104 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 16:54:33.749589  731104 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 16:54:33.750647  731104 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 16:54:33.750687  731104 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 16:54:33.750697  731104 cache.go:56] Caching tarball of preloaded images
	I0510 16:54:33.750756  731104 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 16:54:33.750797  731104 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 16:54:33.750806  731104 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 16:54:33.751171  731104 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json ...
	I0510 16:54:33.751199  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json: {Name:mk8b2b968bcd8f9e3aea76561f259d04a50289d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:33.766962  731104 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 to local cache
	I0510 16:54:33.767103  731104 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local cache directory
	I0510 16:54:33.767122  731104 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local cache directory, skipping pull
	I0510 16:54:33.767126  731104 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in cache, skipping pull
	I0510 16:54:33.767134  731104 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 as a tarball
	I0510 16:54:33.767142  731104 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 from local cache
	I0510 16:54:45.499105  731104 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 from cached tarball
	I0510 16:54:45.499173  731104 cache.go:230] Successfully downloaded all kic artifacts
	I0510 16:54:45.499243  731104 start.go:360] acquireMachinesLock for addons-088134: {Name:mk070a6c546592528f175388e4fddc516de6c3e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 16:54:45.499362  731104 start.go:364] duration metric: took 91.567µs to acquireMachinesLock for "addons-088134"
	I0510 16:54:45.499404  731104 start.go:93] Provisioning new machine with config: &{Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 16:54:45.499528  731104 start.go:125] createHost starting for "" (driver="docker")
	I0510 16:54:45.501519  731104 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0510 16:54:45.501774  731104 start.go:159] libmachine.API.Create for "addons-088134" (driver="docker")
	I0510 16:54:45.501809  731104 client.go:168] LocalClient.Create starting
	I0510 16:54:45.501943  731104 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem
	I0510 16:54:46.313526  731104 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem
	I0510 16:54:46.401998  731104 cli_runner.go:164] Run: docker network inspect addons-088134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0510 16:54:46.417857  731104 cli_runner.go:211] docker network inspect addons-088134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0510 16:54:46.417926  731104 network_create.go:284] running [docker network inspect addons-088134] to gather additional debugging logs...
	I0510 16:54:46.417955  731104 cli_runner.go:164] Run: docker network inspect addons-088134
	W0510 16:54:46.433376  731104 cli_runner.go:211] docker network inspect addons-088134 returned with exit code 1
	I0510 16:54:46.433407  731104 network_create.go:287] error running [docker network inspect addons-088134]: docker network inspect addons-088134: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-088134 not found
	I0510 16:54:46.433420  731104 network_create.go:289] output of [docker network inspect addons-088134]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-088134 not found
	
	** /stderr **
	I0510 16:54:46.433540  731104 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 16:54:46.450034  731104 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d280d0}
	I0510 16:54:46.450092  731104 network_create.go:124] attempt to create docker network addons-088134 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0510 16:54:46.450147  731104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-088134 addons-088134
	I0510 16:54:46.501191  731104 network_create.go:108] docker network addons-088134 192.168.49.0/24 created
	I0510 16:54:46.501226  731104 kic.go:121] calculated static IP "192.168.49.2" for the "addons-088134" container
	I0510 16:54:46.501312  731104 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0510 16:54:46.517151  731104 cli_runner.go:164] Run: docker volume create addons-088134 --label name.minikube.sigs.k8s.io=addons-088134 --label created_by.minikube.sigs.k8s.io=true
	I0510 16:54:46.535023  731104 oci.go:103] Successfully created a docker volume addons-088134
	I0510 16:54:46.535114  731104 cli_runner.go:164] Run: docker run --rm --name addons-088134-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-088134 --entrypoint /usr/bin/test -v addons-088134:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -d /var/lib
	I0510 16:54:50.397117  731104 cli_runner.go:217] Completed: docker run --rm --name addons-088134-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-088134 --entrypoint /usr/bin/test -v addons-088134:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -d /var/lib: (3.86194859s)
	I0510 16:54:50.397155  731104 oci.go:107] Successfully prepared a docker volume addons-088134
	I0510 16:54:50.397190  731104 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 16:54:50.397219  731104 kic.go:194] Starting extracting preloaded images to volume ...
	I0510 16:54:50.397299  731104 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-088134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -I lz4 -xf /preloaded.tar -C /extractDir
	I0510 16:54:54.988647  731104 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-088134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -I lz4 -xf /preloaded.tar -C /extractDir: (4.591299795s)
	I0510 16:54:54.988683  731104 kic.go:203] duration metric: took 4.591460681s to extract preloaded images to volume ...
	W0510 16:54:54.988811  731104 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0510 16:54:54.988909  731104 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0510 16:54:55.036443  731104 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-088134 --name addons-088134 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-088134 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-088134 --network addons-088134 --ip 192.168.49.2 --volume addons-088134:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155
	I0510 16:54:55.322844  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Running}}
	I0510 16:54:55.339984  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:54:55.357949  731104 cli_runner.go:164] Run: docker exec addons-088134 stat /var/lib/dpkg/alternatives/iptables
	I0510 16:54:55.398886  731104 oci.go:144] the created container "addons-088134" has a running status.
	I0510 16:54:55.398921  731104 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa...
	I0510 16:54:55.614482  731104 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0510 16:54:55.635933  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:54:55.653893  731104 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0510 16:54:55.653915  731104 kic_runner.go:114] Args: [docker exec --privileged addons-088134 chown docker:docker /home/docker/.ssh/authorized_keys]
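	The three steps above (key generation, pubkey copy, chown inside the container) can be reproduced by hand with stock tooling; a rough sketch, paths illustrative and assuming the image already ships /home/docker/.ssh:
		ssh-keygen -t rsa -N '' -f ./id_rsa
		docker cp ./id_rsa.pub addons-088134:/home/docker/.ssh/authorized_keys
		docker exec --privileged addons-088134 chown docker:docker /home/docker/.ssh/authorized_keys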
	I0510 16:54:55.755798  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:54:55.775111  731104 machine.go:93] provisionDockerMachine start ...
	I0510 16:54:55.775216  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:55.798837  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:55.799123  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:55.799141  731104 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 16:54:55.998994  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-088134
	
	I0510 16:54:55.999031  731104 ubuntu.go:169] provisioning hostname "addons-088134"
	I0510 16:54:55.999090  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.018776  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:56.019092  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:56.019120  731104 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-088134 && echo "addons-088134" | sudo tee /etc/hostname
	I0510 16:54:56.151604  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-088134
	
	I0510 16:54:56.151702  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.169304  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:56.169593  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:56.169620  731104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-088134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-088134/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-088134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 16:54:56.287709  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 16:54:56.287744  731104 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 16:54:56.287781  731104 ubuntu.go:177] setting up certificates
	I0510 16:54:56.287797  731104 provision.go:84] configureAuth start
	I0510 16:54:56.287867  731104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-088134
	I0510 16:54:56.304724  731104 provision.go:143] copyHostCerts
	I0510 16:54:56.304824  731104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 16:54:56.304977  731104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 16:54:56.305071  731104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 16:54:56.305148  731104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.addons-088134 san=[127.0.0.1 192.168.49.2 addons-088134 localhost minikube]
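	minikube generates this server certificate in Go; an openssl equivalent covering the same SAN set would look roughly like the sketch below (bash, for the process substitution; ca.pem/ca-key.pem stand in for the cert paths listed above, and the extensions are not necessarily byte-for-byte what minikube emits):
		openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.addons-088134" \
		  -keyout server-key.pem -out server.csr
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		  -days 365 -out server.pem \
		  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-088134,DNS:localhost,DNS:minikube')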
	I0510 16:54:56.486900  731104 provision.go:177] copyRemoteCerts
	I0510 16:54:56.486976  731104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 16:54:56.487025  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.504796  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:56.592491  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 16:54:56.615042  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0510 16:54:56.637370  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 16:54:56.659450  731104 provision.go:87] duration metric: took 371.601114ms to configureAuth
	I0510 16:54:56.659485  731104 ubuntu.go:193] setting minikube options for container-runtime
	I0510 16:54:56.659679  731104 config.go:182] Loaded profile config "addons-088134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 16:54:56.659800  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.677955  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:56.678174  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:56.678193  731104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 16:54:56.884502  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 16:54:56.884534  731104 machine.go:96] duration metric: took 1.109396091s to provisionDockerMachine
	I0510 16:54:56.884549  731104 client.go:171] duration metric: took 11.382729697s to LocalClient.Create
	I0510 16:54:56.884566  731104 start.go:167] duration metric: took 11.382793539s to libmachine.API.Create "addons-088134"
	I0510 16:54:56.884574  731104 start.go:293] postStartSetup for "addons-088134" (driver="docker")
	I0510 16:54:56.884584  731104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 16:54:56.884641  731104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 16:54:56.884676  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.901866  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:56.993014  731104 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 16:54:56.996361  731104 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 16:54:56.996396  731104 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 16:54:56.996403  731104 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 16:54:56.996411  731104 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 16:54:56.996423  731104 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 16:54:56.996482  731104 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 16:54:56.996505  731104 start.go:296] duration metric: took 111.925893ms for postStartSetup
	I0510 16:54:56.996830  731104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-088134
	I0510 16:54:57.013547  731104 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json ...
	I0510 16:54:57.013809  731104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 16:54:57.013863  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:57.030683  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:57.116461  731104 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 16:54:57.120862  731104 start.go:128] duration metric: took 11.621310165s to createHost
	I0510 16:54:57.120892  731104 start.go:83] releasing machines lock for "addons-088134", held for 11.621515367s
	I0510 16:54:57.120956  731104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-088134
	I0510 16:54:57.138657  731104 ssh_runner.go:195] Run: cat /version.json
	I0510 16:54:57.138695  731104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 16:54:57.138710  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:57.138781  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:57.156019  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:57.156292  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:57.312331  731104 ssh_runner.go:195] Run: systemctl --version
	I0510 16:54:57.316881  731104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 16:54:57.454671  731104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 16:54:57.459098  731104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 16:54:57.477434  731104 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 16:54:57.477523  731104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 16:54:57.504603  731104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
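	Should the stock CNI configs ever need to come back, the rename above is trivially reversible by stripping the .mk_disabled suffix again; a sketch:
		sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
		  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;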
	I0510 16:54:57.504624  731104 start.go:495] detecting cgroup driver to use...
	I0510 16:54:57.504657  731104 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0510 16:54:57.504707  731104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 16:54:57.519798  731104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 16:54:57.530382  731104 docker.go:225] disabling cri-docker service (if available) ...
	I0510 16:54:57.530440  731104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 16:54:57.543133  731104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 16:54:57.556522  731104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 16:54:57.633473  731104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 16:54:57.714492  731104 docker.go:241] disabling docker service ...
	I0510 16:54:57.714563  731104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 16:54:57.733118  731104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 16:54:57.743768  731104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 16:54:57.825920  731104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 16:54:57.910593  731104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 16:54:57.921432  731104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 16:54:57.936422  731104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 16:54:57.936476  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.945569  731104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 16:54:57.945642  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.954654  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.963785  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.972779  731104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 16:54:57.981140  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.990026  731104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:58.004801  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:58.013835  731104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 16:54:58.022070  731104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
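	The ip_forward write above is runtime-only; the standard way to make both kernel settings survive a reboot is a sysctl.d drop-in, e.g.:
		cat <<-'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
		net.bridge.bridge-nf-call-iptables = 1
		net.ipv4.ip_forward = 1
		EOF
		sudo sysctl --system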
	I0510 16:54:58.029832  731104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 16:54:58.104384  731104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 16:54:58.214495  731104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 16:54:58.214593  731104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 16:54:58.218036  731104 start.go:563] Will wait 60s for crictl version
	I0510 16:54:58.218095  731104 ssh_runner.go:195] Run: which crictl
	I0510 16:54:58.221492  731104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 16:54:58.256903  731104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 16:54:58.257003  731104 ssh_runner.go:195] Run: crio --version
	I0510 16:54:58.293778  731104 ssh_runner.go:195] Run: crio --version
	I0510 16:54:58.329347  731104 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 16:54:58.330515  731104 cli_runner.go:164] Run: docker network inspect addons-088134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 16:54:58.346693  731104 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0510 16:54:58.350407  731104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 16:54:58.360985  731104 kubeadm.go:875] updating cluster {Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 16:54:58.361098  731104 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 16:54:58.361139  731104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 16:54:58.423733  731104 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 16:54:58.423757  731104 crio.go:433] Images already preloaded, skipping extraction
	I0510 16:54:58.423815  731104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 16:54:58.456638  731104 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 16:54:58.456660  731104 cache_images.go:84] Images are preloaded, skipping loading
	I0510 16:54:58.456670  731104 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.33.0 crio true true} ...
	I0510 16:54:58.456782  731104 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-088134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 16:54:58.456844  731104 ssh_runner.go:195] Run: crio config
	I0510 16:54:58.499495  731104 cni.go:84] Creating CNI manager for ""
	I0510 16:54:58.499519  731104 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:54:58.499532  731104 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 16:54:58.499555  731104 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-088134 NodeName:addons-088134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 16:54:58.499675  731104 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-088134"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 16:54:58.499738  731104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 16:54:58.508314  731104 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 16:54:58.508384  731104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 16:54:58.516925  731104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0510 16:54:58.533458  731104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
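	Once these unit files land, systemd can show the merged result, which is handy when debugging the ExecStart override printed earlier; a sketch:
		systemctl cat kubelet                      # kubelet.service plus the 10-kubeadm.conf drop-in
		systemctl show kubelet -p DropInPaths      # confirms which drop-ins are in effect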
	I0510 16:54:58.549799  731104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
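	Recent kubeadm releases can also lint this generated config offline before init ever runs; a sketch against the file just copied (the .new suffix is dropped later in this log):
		/var/lib/minikube/binaries/v1.33.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new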
	I0510 16:54:58.566144  731104 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0510 16:54:58.569436  731104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 16:54:58.579596  731104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 16:54:58.653298  731104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 16:54:58.665995  731104 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134 for IP: 192.168.49.2
	I0510 16:54:58.666019  731104 certs.go:194] generating shared ca certs ...
	I0510 16:54:58.666049  731104 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:58.666196  731104 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 16:54:58.875877  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt ...
	I0510 16:54:58.875913  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt: {Name:mk058140c8b275beb4e709bae4cf0b29ea3c1643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:58.876129  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key ...
	I0510 16:54:58.876147  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key: {Name:mk089d1de06bb5005a6634bbdb0baf0d9fcc36f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:58.876258  731104 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 16:54:59.404697  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt ...
	I0510 16:54:59.404730  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt: {Name:mk37496ac2715c4b2c8e1aa8497c599fc431e991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.404930  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key ...
	I0510 16:54:59.404946  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key: {Name:mkb49d83aaf0d3fdf9d7bd45fb3792a7571b2813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.405049  731104 certs.go:256] generating profile certs ...
	I0510 16:54:59.405113  731104 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.key
	I0510 16:54:59.405127  731104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt with IP's: []
	I0510 16:54:59.432971  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt ...
	I0510 16:54:59.433002  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: {Name:mkdfebd11f87ceef8a84d71d85397bcb519642fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.433157  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.key ...
	I0510 16:54:59.433169  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.key: {Name:mk21445d8e5e4bd7fb61273d95b0c609006fbbbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.433237  731104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41
	I0510 16:54:59.433255  731104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0510 16:55:00.101378  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41 ...
	I0510 16:55:00.101415  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41: {Name:mkebf309fe9c46c35d1c831ef7e73fe547760fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.101599  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41 ...
	I0510 16:55:00.101624  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41: {Name:mke290ab11de1b175dfe7c41149e6881dcd536fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.101700  731104 certs.go:381] copying /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41 -> /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt
	I0510 16:55:00.101779  731104 certs.go:385] copying /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41 -> /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key
	I0510 16:55:00.101827  731104 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key
	I0510 16:55:00.101851  731104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt with IP's: []
	I0510 16:55:00.363898  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt ...
	I0510 16:55:00.363940  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt: {Name:mk957fcdf29ae7c595de720ac14532ca70e2807a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.364115  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key ...
	I0510 16:55:00.364130  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key: {Name:mk7f0542185dfffaf9832a3d9b880ca12a5ed240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.364301  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 16:55:00.364337  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 16:55:00.364365  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 16:55:00.364388  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 16:55:00.365123  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 16:55:00.388385  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 16:55:00.411033  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 16:55:00.433124  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 16:55:00.455370  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0510 16:55:00.477264  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 16:55:00.499371  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 16:55:00.521583  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 16:55:00.543430  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 16:55:00.565317  731104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 16:55:00.581640  731104 ssh_runner.go:195] Run: openssl version
	I0510 16:55:00.587196  731104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 16:55:00.596069  731104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 16:55:00.599525  731104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 16:55:00.599582  731104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 16:55:00.606273  731104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
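	b5213941 is the OpenSSL subject hash printed by the x509 call above; -CApath lookups resolve a CA by exactly that <hash>.0 filename, which is why the symlink is spelled that way. A quick check from inside the node:
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
		ls -l /etc/ssl/certs/b5213941.0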
	I0510 16:55:00.614976  731104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 16:55:00.617997  731104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 16:55:00.618057  731104 kubeadm.go:392] StartCluster: {Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 16:55:00.618145  731104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 16:55:00.618189  731104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 16:55:00.651684  731104 cri.go:89] found id: ""
	I0510 16:55:00.651766  731104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 16:55:00.660318  731104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 16:55:00.668494  731104 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0510 16:55:00.668565  731104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 16:55:00.677083  731104 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 16:55:00.677105  731104 kubeadm.go:157] found existing configuration files:
	
	I0510 16:55:00.677156  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 16:55:00.685265  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 16:55:00.685337  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 16:55:00.693157  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 16:55:00.700925  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 16:55:00.700977  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 16:55:00.708768  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 16:55:00.716607  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 16:55:00.716671  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 16:55:00.724554  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 16:55:00.732612  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 16:55:00.732667  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 16:55:00.740335  731104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0510 16:55:00.776040  731104 kubeadm.go:310] [init] Using Kubernetes version: v1.33.0
	I0510 16:55:00.776115  731104 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 16:55:00.794687  731104 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0510 16:55:00.794806  731104 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-gcp
	I0510 16:55:00.794875  731104 kubeadm.go:310] OS: Linux
	I0510 16:55:00.794954  731104 kubeadm.go:310] CGROUPS_CPU: enabled
	I0510 16:55:00.795042  731104 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0510 16:55:00.795115  731104 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0510 16:55:00.795158  731104 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0510 16:55:00.795201  731104 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0510 16:55:00.795243  731104 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0510 16:55:00.795306  731104 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0510 16:55:00.795374  731104 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0510 16:55:00.795447  731104 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0510 16:55:00.849233  731104 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 16:55:00.849414  731104 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 16:55:00.849552  731104 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0510 16:55:00.856905  731104 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 16:55:00.860444  731104 out.go:235]   - Generating certificates and keys ...
	I0510 16:55:00.860589  731104 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 16:55:00.860684  731104 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 16:55:00.926590  731104 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 16:55:01.184213  731104 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 16:55:01.236251  731104 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 16:55:01.781902  731104 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 16:55:02.158552  731104 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 16:55:02.158687  731104 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-088134 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0510 16:55:02.453692  731104 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 16:55:02.453878  731104 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-088134 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0510 16:55:02.558010  731104 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 16:55:03.081854  731104 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 16:55:03.250515  731104 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 16:55:03.250663  731104 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 16:55:03.599764  731104 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 16:55:03.615264  731104 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0510 16:55:04.111684  731104 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 16:55:04.271522  731104 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 16:55:04.857498  731104 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 16:55:04.858004  731104 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 16:55:04.860122  731104 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 16:55:04.862371  731104 out.go:235]   - Booting up control plane ...
	I0510 16:55:04.862482  731104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 16:55:04.862602  731104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 16:55:04.862666  731104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 16:55:04.871557  731104 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 16:55:04.876804  731104 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 16:55:04.876877  731104 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 16:55:04.954915  731104 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0510 16:55:04.955106  731104 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0510 16:55:05.956672  731104 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001907371s
	I0510 16:55:05.960854  731104 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0510 16:55:05.960980  731104 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0510 16:55:05.961103  731104 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0510 16:55:05.961192  731104 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0510 16:55:08.245996  731104 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.285048452s
	I0510 16:55:09.059145  731104 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.098287186s
	I0510 16:55:10.462702  731104 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501750047s
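	The same three endpoints kubeadm polls here can be probed by hand from the node; a sketch (self-signed serving certs, hence -k; the apiserver's /livez is anonymously readable via the default system:public-info-viewer binding, and the KCM/scheduler health paths are always-allowed by default):
		curl -ks https://192.168.49.2:8443/livez     # kube-apiserver
		curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
		curl -ks https://127.0.0.1:10259/livez       # kube-scheduler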
	I0510 16:55:10.474885  731104 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0510 16:55:10.484434  731104 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0510 16:55:10.503639  731104 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0510 16:55:10.503931  731104 kubeadm.go:310] [mark-control-plane] Marking the node addons-088134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0510 16:55:10.512234  731104 kubeadm.go:310] [bootstrap-token] Using token: ngtmmz.nuzx3d2w9dfre1k4
	I0510 16:55:10.513714  731104 out.go:235]   - Configuring RBAC rules ...
	I0510 16:55:10.513877  731104 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0510 16:55:10.517540  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0510 16:55:10.525111  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0510 16:55:10.527449  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0510 16:55:10.530125  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0510 16:55:10.532559  731104 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0510 16:55:10.869284  731104 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0510 16:55:11.287324  731104 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0510 16:55:11.868144  731104 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0510 16:55:11.868992  731104 kubeadm.go:310] 
	I0510 16:55:11.869077  731104 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0510 16:55:11.869087  731104 kubeadm.go:310] 
	I0510 16:55:11.869186  731104 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0510 16:55:11.869195  731104 kubeadm.go:310] 
	I0510 16:55:11.869225  731104 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0510 16:55:11.869303  731104 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0510 16:55:11.869366  731104 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0510 16:55:11.869407  731104 kubeadm.go:310] 
	I0510 16:55:11.869496  731104 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0510 16:55:11.869507  731104 kubeadm.go:310] 
	I0510 16:55:11.869564  731104 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0510 16:55:11.869575  731104 kubeadm.go:310] 
	I0510 16:55:11.869646  731104 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0510 16:55:11.869742  731104 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0510 16:55:11.869839  731104 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0510 16:55:11.869848  731104 kubeadm.go:310] 
	I0510 16:55:11.869953  731104 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0510 16:55:11.870052  731104 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0510 16:55:11.870061  731104 kubeadm.go:310] 
	I0510 16:55:11.870159  731104 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ngtmmz.nuzx3d2w9dfre1k4 \
	I0510 16:55:11.870297  731104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cab2ae3dd65908c2d6393ff2fdde0e4e0dbad5e0ec941434a6c816c7eedead32 \
	I0510 16:55:11.870333  731104 kubeadm.go:310] 	--control-plane 
	I0510 16:55:11.870343  731104 kubeadm.go:310] 
	I0510 16:55:11.870433  731104 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0510 16:55:11.870451  731104 kubeadm.go:310] 
	I0510 16:55:11.870534  731104 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ngtmmz.nuzx3d2w9dfre1k4 \
	I0510 16:55:11.870650  731104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cab2ae3dd65908c2d6393ff2fdde0e4e0dbad5e0ec941434a6c816c7eedead32 
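	The --discovery-token-ca-cert-hash above is just the SHA-256 of the cluster CA's public key and can be recomputed at any time with the standard openssl pipeline; a sketch (on this node the CA sits under /var/lib/minikube/certs rather than kubeadm's usual /etc/kubernetes/pki):
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'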
	I0510 16:55:11.872926  731104 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0510 16:55:11.873136  731104 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-gcp\n", err: exit status 1
	I0510 16:55:11.873232  731104 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0510 16:55:11.873273  731104 cni.go:84] Creating CNI manager for ""
	I0510 16:55:11.873296  731104 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:55:11.874878  731104 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0510 16:55:11.876071  731104 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0510 16:55:11.879859  731104 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.33.0/kubectl ...
	I0510 16:55:11.879879  731104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0510 16:55:11.896739  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
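
A quick sanity check that the kindnet manifest applied above actually came up; note the app=kindnet label is an assumption based on kind's standard kindnet DaemonSet, not something this log confirms:

    # list the CNI pods deployed by the manifest above (label assumed)
    kubectl --context addons-088134 -n kube-system get pods -l app=kindnet
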
	I0510 16:55:12.096196  731104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 16:55:12.096311  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:12.096339  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-088134 minikube.k8s.io/updated_at=2025_05_10T16_55_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4 minikube.k8s.io/name=addons-088134 minikube.k8s.io/primary=true
	I0510 16:55:12.257891  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:12.258014  731104 ops.go:34] apiserver oom_adj: -16
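
The -16 read back above comes from the legacy /proc/<pid>/oom_adj knob (range -17..15, where -17 disables OOM killing entirely), so the apiserver is nearly exempt from the OOM killer under memory pressure. A minimal sketch for inspecting both the legacy and the modern interface on the node:

    # legacy knob, what the log above reads
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # modern equivalent (range -1000..1000)
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
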
	I0510 16:55:12.758409  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:13.258331  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:13.758876  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:14.258197  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:14.758683  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:15.258410  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:15.758026  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:16.258119  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:16.758231  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:16.825494  731104 kubeadm.go:1105] duration metric: took 4.729247498s to wait for elevateKubeSystemPrivileges
	I0510 16:55:16.825539  731104 kubeadm.go:394] duration metric: took 16.207488619s to StartCluster
	I0510 16:55:16.825578  731104 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:16.825744  731104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 16:55:16.826244  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:16.826504  731104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0510 16:55:16.826499  731104 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 16:55:16.826542  731104 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
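
The toEnable map above drives all of the per-addon setup that follows. Outside the test harness the same switches are flipped with the minikube CLI, e.g.:

    # show which addons are enabled for this profile
    minikube -p addons-088134 addons list
    # flip a single addon on by hand
    minikube -p addons-088134 addons enable ingress
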
	I0510 16:55:16.826657  731104 addons.go:69] Setting yakd=true in profile "addons-088134"
	I0510 16:55:16.826678  731104 addons.go:238] Setting addon yakd=true in "addons-088134"
	I0510 16:55:16.826692  731104 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-088134"
	I0510 16:55:16.826714  731104 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-088134"
	I0510 16:55:16.826724  731104 addons.go:69] Setting metrics-server=true in profile "addons-088134"
	I0510 16:55:16.826724  731104 config.go:182] Loaded profile config "addons-088134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 16:55:16.826751  731104 addons.go:238] Setting addon metrics-server=true in "addons-088134"
	I0510 16:55:16.826760  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.826775  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.826785  731104 addons.go:69] Setting storage-provisioner=true in profile "addons-088134"
	I0510 16:55:16.826800  731104 addons.go:238] Setting addon storage-provisioner=true in "addons-088134"
	I0510 16:55:16.826830  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827021  731104 addons.go:69] Setting volcano=true in profile "addons-088134"
	I0510 16:55:16.827073  731104 addons.go:238] Setting addon volcano=true in "addons-088134"
	I0510 16:55:16.827113  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827185  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827274  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827286  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827494  731104 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-088134"
	I0510 16:55:16.827525  731104 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-088134"
	I0510 16:55:16.827536  731104 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-088134"
	I0510 16:55:16.827553  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827561  731104 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-088134"
	I0510 16:55:16.827626  731104 addons.go:69] Setting registry=true in profile "addons-088134"
	I0510 16:55:16.827659  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827651  731104 addons.go:238] Setting addon registry=true in "addons-088134"
	I0510 16:55:16.827691  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827848  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827994  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827985  731104 addons.go:69] Setting volumesnapshots=true in profile "addons-088134"
	I0510 16:55:16.828012  731104 addons.go:238] Setting addon volumesnapshots=true in "addons-088134"
	I0510 16:55:16.828044  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.828229  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.828471  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.826714  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.829075  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.829154  731104 addons.go:69] Setting ingress=true in profile "addons-088134"
	I0510 16:55:16.829178  731104 addons.go:238] Setting addon ingress=true in "addons-088134"
	I0510 16:55:16.829339  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.831536  731104 addons.go:69] Setting gcp-auth=true in profile "addons-088134"
	I0510 16:55:16.831960  731104 mustload.go:65] Loading cluster: addons-088134
	I0510 16:55:16.832187  731104 config.go:182] Loaded profile config "addons-088134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 16:55:16.832969  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.833496  731104 out.go:177] * Verifying Kubernetes components...
	I0510 16:55:16.833548  731104 addons.go:69] Setting ingress-dns=true in profile "addons-088134"
	I0510 16:55:16.833595  731104 addons.go:238] Setting addon ingress-dns=true in "addons-088134"
	I0510 16:55:16.833646  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.835281  731104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 16:55:16.829084  731104 addons.go:69] Setting default-storageclass=true in profile "addons-088134"
	I0510 16:55:16.838706  731104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-088134"
	I0510 16:55:16.839118  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.829097  731104 addons.go:69] Setting cloud-spanner=true in profile "addons-088134"
	I0510 16:55:16.829107  731104 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-088134"
	I0510 16:55:16.839216  731104 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-088134"
	I0510 16:55:16.839267  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.839738  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.839946  731104 addons.go:238] Setting addon cloud-spanner=true in "addons-088134"
	I0510 16:55:16.839997  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.826675  731104 addons.go:69] Setting inspektor-gadget=true in profile "addons-088134"
	I0510 16:55:16.840429  731104 addons.go:238] Setting addon inspektor-gadget=true in "addons-088134"
	I0510 16:55:16.840470  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.852208  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.852866  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.852930  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.854375  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.863036  731104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 16:55:16.864562  731104 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 16:55:16.864588  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 16:55:16.864663  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
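
The Go template in these repeated inspect calls extracts the host port docker mapped to the container's SSH port 22; judging by the sshutil lines further down it resolves to 33139 here. A simpler equivalent, as a sketch:

    # same lookup without the template
    docker port addons-088134 22
    # prints e.g. 0.0.0.0:33139
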
	I0510 16:55:16.877554  731104 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0510 16:55:16.879102  731104 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 16:55:16.879127  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0510 16:55:16.879194  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.881290  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0510 16:55:16.882489  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0510 16:55:16.882515  731104 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0510 16:55:16.882603  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.886384  731104 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0510 16:55:16.887782  731104 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 16:55:16.887808  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0510 16:55:16.887875  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	W0510 16:55:16.893100  731104 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0510 16:55:16.899547  731104 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0510 16:55:16.900892  731104 addons.go:238] Setting addon default-storageclass=true in "addons-088134"
	I0510 16:55:16.900947  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.900958  731104 out.go:177]   - Using image docker.io/registry:3.0.0
	I0510 16:55:16.901404  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.902382  731104 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0510 16:55:16.902413  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0510 16:55:16.902468  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.904344  731104 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-088134"
	I0510 16:55:16.904438  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.904976  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.911787  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.912677  731104 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0510 16:55:16.914101  731104 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 16:55:16.914121  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0510 16:55:16.914178  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.920248  731104 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.39.0
	I0510 16:55:16.921656  731104 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0510 16:55:16.921682  731104 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0510 16:55:16.921759  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.929016  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0510 16:55:16.929170  731104 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0510 16:55:16.930198  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0510 16:55:16.930218  731104 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0510 16:55:16.930293  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.931849  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0510 16:55:16.934178  731104 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0510 16:55:16.935098  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0510 16:55:16.935460  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 16:55:16.935487  731104 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 16:55:16.935623  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.937330  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.937750  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0510 16:55:16.939221  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 16:55:16.940318  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0510 16:55:16.941169  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.941379  731104 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.33
	I0510 16:55:16.941379  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0510 16:55:16.942574  731104 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0510 16:55:16.942593  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0510 16:55:16.942650  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.943383  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0510 16:55:16.943805  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.950701  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 16:55:16.953247  731104 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 16:55:16.953268  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0510 16:55:16.953401  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.960409  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0510 16:55:16.961805  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0510 16:55:16.962800  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.964552  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0510 16:55:16.964573  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0510 16:55:16.964640  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.969397  731104 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 16:55:16.969426  731104 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 16:55:16.969490  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.976404  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.979521  731104 out.go:177]   - Using image docker.io/busybox:stable
	I0510 16:55:16.983120  731104 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0510 16:55:16.984026  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.985849  731104 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 16:55:16.985870  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0510 16:55:16.985937  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.988989  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.992937  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.006373  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.007664  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.009414  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.010031  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.011853  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.012061  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.157907  731104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
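
The sed pipeline above rewrites the live coredns ConfigMap in flight: it inserts a log directive before the errors line and a hosts block before the forward line, then pushes the result back through kubectl replace. Reconstructed from the sed expressions (not captured from the cluster), the relevant Corefile fragment ends up as:

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
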
	I0510 16:55:17.351306  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 16:55:17.352546  731104 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0510 16:55:17.352623  731104 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0510 16:55:17.362670  731104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 16:55:17.445441  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 16:55:17.447281  731104 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0510 16:55:17.447311  731104 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0510 16:55:17.455710  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 16:55:17.547883  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 16:55:17.644008  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0510 16:55:17.644036  731104 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0510 16:55:17.648806  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 16:55:17.652823  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 16:55:17.652897  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0510 16:55:17.660326  731104 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0510 16:55:17.660408  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0510 16:55:17.660669  731104 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0510 16:55:17.660724  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0510 16:55:17.662861  731104 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0510 16:55:17.662918  731104 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0510 16:55:17.666875  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0510 16:55:17.666952  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0510 16:55:17.746071  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 16:55:17.865629  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 16:55:17.944540  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0510 16:55:17.958611  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0510 16:55:17.958640  731104 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0510 16:55:17.967241  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 16:55:17.967329  731104 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 16:55:18.044082  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0510 16:55:18.044168  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0510 16:55:18.049955  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0510 16:55:18.054187  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0510 16:55:18.061195  731104 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0510 16:55:18.061229  731104 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0510 16:55:18.358854  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0510 16:55:18.358955  731104 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0510 16:55:18.550305  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 16:55:18.550337  731104 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 16:55:18.765171  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0510 16:55:18.765260  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0510 16:55:18.845747  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0510 16:55:18.845785  731104 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0510 16:55:18.862941  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0510 16:55:18.862969  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0510 16:55:19.047880  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 16:55:19.149161  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0510 16:55:19.248070  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0510 16:55:19.248157  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0510 16:55:19.258602  731104 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 16:55:19.258703  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0510 16:55:19.354527  731104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.196560823s)
	I0510 16:55:19.354657  731104 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0510 16:55:19.451834  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 16:55:19.745338  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0510 16:55:19.745381  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0510 16:55:19.962402  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0510 16:55:19.962433  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0510 16:55:20.144953  731104 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-088134" context rescaled to 1 replicas
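
The rescale above trims CoreDNS from kubeadm's default of two replicas down to one, which is enough for a single-node cluster. The hand-rolled equivalent would be:

    kubectl --context addons-088134 -n kube-system scale deployment coredns --replicas=1
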
	I0510 16:55:20.151893  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0510 16:55:20.151997  731104 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0510 16:55:20.345483  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0510 16:55:20.345590  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0510 16:55:20.547215  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0510 16:55:20.547243  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0510 16:55:20.657266  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 16:55:20.657367  731104 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0510 16:55:20.867351  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 16:55:21.665349  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.313945073s)
	I0510 16:55:21.665403  731104 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.302691389s)
	I0510 16:55:21.665439  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.21988542s)
	I0510 16:55:21.665469  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.209736384s)
	I0510 16:55:21.665604  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.117647518s)
	I0510 16:55:21.667379  731104 node_ready.go:35] waiting up to 6m0s for node "addons-088134" to be "Ready" ...
	I0510 16:55:23.258680  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.609829618s)
	I0510 16:55:23.258727  731104 addons.go:479] Verifying addon ingress=true in "addons-088134"
	I0510 16:55:23.258810  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.512646003s)
	I0510 16:55:23.258870  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.393203066s)
	I0510 16:55:23.258946  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.314309627s)
	I0510 16:55:23.259038  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.209048133s)
	I0510 16:55:23.259116  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.204882771s)
	I0510 16:55:23.259142  731104 addons.go:479] Verifying addon registry=true in "addons-088134"
	I0510 16:55:23.260333  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.212307895s)
	I0510 16:55:23.260364  731104 addons.go:479] Verifying addon metrics-server=true in "addons-088134"
	I0510 16:55:23.260448  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.11051481s)
	I0510 16:55:23.260584  731104 out.go:177] * Verifying ingress addon...
	I0510 16:55:23.261537  731104 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-088134 service yakd-dashboard -n yakd-dashboard
	
	I0510 16:55:23.262860  731104 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0510 16:55:23.264216  731104 out.go:177] * Verifying registry addon...
	I0510 16:55:23.266629  731104 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0510 16:55:23.266941  731104 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0510 16:55:23.266962  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:23.361551  731104 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 16:55:23.361580  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0510 16:55:23.672644  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:23.849227  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:23.849519  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:23.861712  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.409766015s)
	W0510 16:55:23.861773  731104 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0510 16:55:23.861809  731104 retry.go:31] will retry after 180.682115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0510 16:55:24.042702  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
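
The failure being retried here is the usual CRD/CR ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRD that defines it, and the apiserver's REST mapping has not refreshed yet, hence "no matches for kind". The retry succeeds mainly because the CRDs created on the first attempt are established by then. A race-free sketch (same manifests, run with the same kubeconfig) applies the CRD first and waits for it before submitting the CR:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    # block until the CRD is usable
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
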
	I0510 16:55:24.147294  731104 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0510 16:55:24.147376  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:24.172824  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:24.266600  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:24.269049  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:24.347190  731104 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0510 16:55:24.371297  731104 addons.go:238] Setting addon gcp-auth=true in "addons-088134"
	I0510 16:55:24.371363  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:24.371962  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:24.391474  731104 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0510 16:55:24.391566  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:24.410020  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:24.456585  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.589093175s)
	I0510 16:55:24.456644  731104 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-088134"
	I0510 16:55:24.458366  731104 out.go:177] * Verifying csi-hostpath-driver addon...
	I0510 16:55:24.460532  731104 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0510 16:55:24.467705  731104 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 16:55:24.467733  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
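
The kapi lines below poll the labeled pods repeatedly until they leave Pending. The same wait can be expressed declaratively with kubectl; a sketch for the csi-hostpath-driver pods being watched here:

    kubectl --context addons-088134 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=300s
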
	I0510 16:55:24.766612  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:24.769401  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:24.963365  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:25.265827  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:25.268740  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:25.463497  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:25.766155  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:25.769173  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:25.964179  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:26.171009  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:26.266047  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:26.269140  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:26.464062  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:26.766957  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:26.769030  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:26.802556  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.759792574s)
	I0510 16:55:26.802596  731104 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.411064974s)
	I0510 16:55:26.804636  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 16:55:26.805945  731104 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0510 16:55:26.807166  731104 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0510 16:55:26.807183  731104 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0510 16:55:26.824759  731104 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0510 16:55:26.824785  731104 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0510 16:55:26.841624  731104 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 16:55:26.841646  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0510 16:55:26.858490  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 16:55:26.963638  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:27.186856  731104 addons.go:479] Verifying addon gcp-auth=true in "addons-088134"
	I0510 16:55:27.188824  731104 out.go:177] * Verifying gcp-auth addon...
	I0510 16:55:27.190931  731104 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0510 16:55:27.192914  731104 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0510 16:55:27.192937  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:27.267266  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:27.268925  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:27.463855  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:27.694354  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:27.766098  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:27.769186  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:27.964119  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:28.194514  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:28.266593  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:28.269473  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:28.465037  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:28.670986  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:28.694823  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:28.766733  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:28.768809  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:28.963822  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:29.194234  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:29.266716  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:29.268942  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:29.463900  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:29.693784  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:29.766724  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:29.768851  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:29.963971  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:30.195156  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:30.296249  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:30.296387  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:30.464347  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:30.694574  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:30.766523  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:30.769634  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:30.963680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:31.170470  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:31.194338  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:31.266491  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:31.269499  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:31.464458  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:31.693868  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:31.765983  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:31.769130  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:31.964501  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:32.194742  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:32.266818  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:32.268827  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:32.463970  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:32.694466  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:32.766193  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:32.769277  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:32.964262  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:33.171014  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:33.193723  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:33.266574  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:33.269571  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:33.463163  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:33.694007  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:33.765672  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:33.769711  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:33.963552  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:34.194134  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:34.266064  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:34.269030  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:34.464032  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:34.694753  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:34.766309  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:34.769211  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:34.965331  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:35.194029  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:35.266110  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:35.269107  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:35.464127  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:35.670826  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:35.694341  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:35.766409  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:35.769432  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:35.964137  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:36.194922  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:36.267029  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:36.268884  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:36.463869  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:36.694691  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:36.766631  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:36.768710  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:36.964099  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:37.194656  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:37.266442  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:37.269423  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:37.464258  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:37.671021  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:37.693809  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:37.766789  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:37.768883  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:37.963849  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:38.195067  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:38.297222  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:38.297477  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:38.463792  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:38.694689  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:38.766464  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:38.769598  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:38.963475  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:39.194090  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:39.266016  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:39.268991  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:39.463974  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:39.694465  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:39.766154  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:39.769220  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:39.964346  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:40.170198  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:40.194371  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:40.266512  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:40.269531  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:40.463576  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:40.694698  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:40.766546  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:40.769524  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:40.963360  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:41.194858  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:41.266612  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:41.269725  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:41.463522  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:41.694069  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:41.765825  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:41.769891  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:41.964400  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:42.170273  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:42.194296  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:42.266705  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:42.268877  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:42.464026  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:42.694583  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:42.766394  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:42.769577  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:42.963349  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:43.194099  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:43.265690  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:43.269811  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:43.463786  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:43.694457  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:43.766950  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:43.770137  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:43.963964  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:44.170818  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:44.194469  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:44.266619  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:44.269598  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:44.463466  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:44.694164  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:44.765779  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:44.769850  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:44.963551  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:45.193855  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:45.266653  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:45.268767  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:45.463691  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:45.694116  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:45.765712  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:45.769878  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:45.963863  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:46.170946  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:46.194677  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:46.266845  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:46.268824  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:46.463698  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:46.694739  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:46.766584  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:46.769589  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:46.963824  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:47.194543  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:47.266434  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:47.269184  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:47.464069  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:47.694826  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:47.766768  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:47.768836  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:47.963717  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:48.194473  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:48.266220  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:48.269241  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:48.464228  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:48.669888  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:48.694816  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:48.766527  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:48.769531  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:48.963649  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:49.193883  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:49.266951  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:49.269122  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:49.463899  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:49.694559  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:49.766693  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:49.768735  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:49.963661  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:50.194678  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:50.266509  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:50.269502  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:50.463508  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:50.670173  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:50.693995  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:50.765780  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:50.769898  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:50.963745  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:51.194232  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:51.266056  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:51.268830  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:51.463784  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:51.694282  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:51.766202  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:51.769218  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:51.963960  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:52.194736  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:52.266744  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:52.269101  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:52.464445  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:52.694111  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:52.765968  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:52.768888  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:52.963893  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:53.170713  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:53.194535  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:53.266508  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:53.269581  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:53.463491  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:53.694198  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:53.766076  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:53.769148  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:53.964050  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:54.193639  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:54.267258  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:54.269280  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:54.464534  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:54.694081  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:54.765842  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:54.769000  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:54.964129  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:55.170792  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:55.194575  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:55.266589  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:55.269558  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:55.463513  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:55.693587  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:55.766342  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:55.769476  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:55.964542  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:56.194746  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:56.266570  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:56.269750  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:56.463637  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:56.694178  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:56.765942  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:56.769168  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:56.964645  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:57.193851  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:57.266858  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:57.268998  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:57.463938  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:57.670903  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:57.694745  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:57.766496  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:57.769693  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:57.963595  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:58.194430  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:58.266468  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:58.269548  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:58.463467  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:58.694363  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:58.766345  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:58.769523  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:58.964567  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:59.194436  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:59.266591  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:59.269992  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:59.463845  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:59.694133  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:59.765932  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:59.768966  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:59.963971  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:56:00.170650  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:56:00.194379  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:00.266182  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:00.269074  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:00.464295  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:00.694680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:00.794998  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:00.795245  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:00.964394  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:01.193898  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:01.267725  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:01.276517  731104 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 16:56:01.276547  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:01.464271  731104 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 16:56:01.464303  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
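
The `kapi.go:86` / `kapi.go:96` pairs above show the addon wait in two phases: first discover the pods behind a label selector ("Found N Pods for label selector ..."), then poll until every matched pod leaves Pending. A rough client-go equivalent of that pattern is sketched below; the namespace, interval, and timeout are assumptions for illustration:

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods lists the pods matching a label selector and polls until
// all of them are Running, in the spirit of the kapi.go lines above.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // selector not matched yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still "Pending", as in the log lines
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace is an assumption; the registry addon runs in kube-system.
	if err := waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}
```
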
	I0510 16:56:01.670506  731104 node_ready.go:49] node "addons-088134" is "Ready"
	I0510 16:56:01.670541  731104 node_ready.go:38] duration metric: took 40.003132925s for node "addons-088134" to be "Ready" ...
	I0510 16:56:01.670563  731104 api_server.go:52] waiting for apiserver process to appear ...
	I0510 16:56:01.670627  731104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 16:56:01.685377  731104 api_server.go:72] duration metric: took 44.858758077s to wait for apiserver process to appear ...
	I0510 16:56:01.685410  731104 api_server.go:88] waiting for apiserver healthz status ...
	I0510 16:56:01.685439  731104 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0510 16:56:01.691167  731104 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
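
After confirming the kube-apiserver process with `pgrep`, the tooling probes the apiserver's `/healthz` endpoint directly and expects a `200` with body `ok`, exactly as logged above. A minimal sketch of such a probe follows; skipping TLS verification is an assumption made for brevity, and a real client should instead trust the cluster CA from the kubeconfig:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: skip certificate verification.
		// Production code should pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```
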
	I0510 16:56:01.692299  731104 api_server.go:141] control plane version: v1.33.0
	I0510 16:56:01.692331  731104 api_server.go:131] duration metric: took 6.91101ms to wait for apiserver health ...
	I0510 16:56:01.692345  731104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 16:56:01.693473  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:01.696016  731104 system_pods.go:59] 19 kube-system pods found
	I0510 16:56:01.696055  731104 system_pods.go:61] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:01.696067  731104 system_pods.go:61] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 16:56:01.696080  731104 system_pods.go:61] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:01.696093  731104 system_pods.go:61] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:01.696103  731104 system_pods.go:61] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:01.696114  731104 system_pods.go:61] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:01.696123  731104 system_pods.go:61] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:01.696131  731104 system_pods.go:61] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:01.696139  731104 system_pods.go:61] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:01.696151  731104 system_pods.go:61] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:01.696160  731104 system_pods.go:61] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:01.696169  731104 system_pods.go:61] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:01.696177  731104 system_pods.go:61] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:01.696191  731104 system_pods.go:61] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:01.696205  731104 system_pods.go:61] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:01.696217  731104 system_pods.go:61] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:01.696228  731104 system_pods.go:61] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.696239  731104 system_pods.go:61] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.696250  731104 system_pods.go:61] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 16:56:01.696259  731104 system_pods.go:74] duration metric: took 3.906115ms to wait for pod list to return data ...
	I0510 16:56:01.696272  731104 default_sa.go:34] waiting for default service account to be created ...
	I0510 16:56:01.756925  731104 default_sa.go:45] found service account: "default"
	I0510 16:56:01.756968  731104 default_sa.go:55] duration metric: took 60.684361ms for default service account to be created ...
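
The default-service-account wait exists because kube-controller-manager creates the `default` ServiceAccount asynchronously after the namespace appears. A sketch of that check with client-go (polling a Get until it stops returning NotFound; interval and timeout are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitDefaultSA polls until the "default" ServiceAccount exists in the
// "default" namespace; it is created asynchronously by the controllers.
func waitDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitDefaultSA(context.Background(), cs); err != nil {
		panic(err)
	}
	fmt.Println(`found service account: "default"`)
}
```
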
	I0510 16:56:01.756981  731104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 16:56:01.769236  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:01.769326  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:01.844937  731104 system_pods.go:86] 19 kube-system pods found
	I0510 16:56:01.845057  731104 system_pods.go:89] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:01.845086  731104 system_pods.go:89] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 16:56:01.845134  731104 system_pods.go:89] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:01.845155  731104 system_pods.go:89] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:01.845173  731104 system_pods.go:89] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:01.845189  731104 system_pods.go:89] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:01.845228  731104 system_pods.go:89] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:01.845239  731104 system_pods.go:89] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:01.845245  731104 system_pods.go:89] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:01.845256  731104 system_pods.go:89] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:01.845261  731104 system_pods.go:89] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:01.845266  731104 system_pods.go:89] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:01.845278  731104 system_pods.go:89] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:01.845303  731104 system_pods.go:89] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:01.845316  731104 system_pods.go:89] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:01.845324  731104 system_pods.go:89] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:01.845354  731104 system_pods.go:89] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.845369  731104 system_pods.go:89] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.845380  731104 system_pods.go:89] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 16:56:01.845410  731104 retry.go:31] will retry after 273.218795ms: missing components: kube-dns
	I0510 16:56:01.964803  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:02.151901  731104 system_pods.go:86] 19 kube-system pods found
	I0510 16:56:02.151984  731104 system_pods.go:89] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:02.151999  731104 system_pods.go:89] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 16:56:02.152080  731104 system_pods.go:89] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:02.152126  731104 system_pods.go:89] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:02.152152  731104 system_pods.go:89] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:02.152164  731104 system_pods.go:89] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:02.152174  731104 system_pods.go:89] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:02.152180  731104 system_pods.go:89] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:02.152186  731104 system_pods.go:89] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:02.152200  731104 system_pods.go:89] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:02.152207  731104 system_pods.go:89] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:02.152215  731104 system_pods.go:89] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:02.152229  731104 system_pods.go:89] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:02.152244  731104 system_pods.go:89] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:02.152254  731104 system_pods.go:89] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:02.152270  731104 system_pods.go:89] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:02.152284  731104 system_pods.go:89] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.152398  731104 system_pods.go:89] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.152701  731104 system_pods.go:89] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 16:56:02.152730  731104 retry.go:31] will retry after 326.769279ms: missing components: kube-dns
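
The `retry.go:31` lines show the k8s-apps check failing on a missing component (kube-dns, i.e. CoreDNS still Pending) and sleeping a short randomized interval before re-checking. The ~273ms and ~326ms delays suggest jittered backoff; the exact strategy and growth factor below are assumptions, so treat this as a sketch of the pattern rather than minikube's retry implementation:

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs check repeatedly, sleeping a randomized, growing
// interval between failures, in the spirit of the retry.go lines above.
func retryWithBackoff(ctx context.Context, attempts int, base time.Duration,
	check func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(ctx); err == nil {
			return nil
		}
		// Jitter the delay (assumption: uniform jitter, doubling base).
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		select {
		case <-time.After(sleep):
			base *= 2
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(context.Background(), 5, 250*time.Millisecond,
		func(ctx context.Context) error {
			tries++
			if tries < 3 {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		})
	fmt.Println("done:", err)
}
```
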
	I0510 16:56:02.248975  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:02.349933  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:02.350148  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:02.465121  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:02.566964  731104 system_pods.go:86] 19 kube-system pods found
	I0510 16:56:02.567001  731104 system_pods.go:89] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:02.567006  731104 system_pods.go:89] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Running
	I0510 16:56:02.567014  731104 system_pods.go:89] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:02.567020  731104 system_pods.go:89] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:02.567025  731104 system_pods.go:89] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:02.567030  731104 system_pods.go:89] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:02.567034  731104 system_pods.go:89] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:02.567037  731104 system_pods.go:89] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:02.567042  731104 system_pods.go:89] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:02.567047  731104 system_pods.go:89] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:02.567050  731104 system_pods.go:89] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:02.567053  731104 system_pods.go:89] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:02.567058  731104 system_pods.go:89] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:02.567063  731104 system_pods.go:89] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:02.567072  731104 system_pods.go:89] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:02.567079  731104 system_pods.go:89] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:02.567087  731104 system_pods.go:89] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.567094  731104 system_pods.go:89] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.567098  731104 system_pods.go:89] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Running
	I0510 16:56:02.567106  731104 system_pods.go:126] duration metric: took 810.119278ms to wait for k8s-apps to be running ...
	I0510 16:56:02.567116  731104 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 16:56:02.567160  731104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 16:56:02.579185  731104 system_svc.go:56] duration metric: took 12.058834ms WaitForService to wait for kubelet
	I0510 16:56:02.579217  731104 kubeadm.go:578] duration metric: took 45.752605221s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 16:56:02.579244  731104 node_conditions.go:102] verifying NodePressure condition ...
	I0510 16:56:02.582180  731104 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 16:56:02.582208  731104 node_conditions.go:123] node cpu capacity is 8
	I0510 16:56:02.582228  731104 node_conditions.go:105] duration metric: took 2.977345ms to run NodePressure ...
	I0510 16:56:02.582244  731104 start.go:241] waiting for startup goroutines ...
	I0510 16:56:02.694366  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:02.766751  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:02.769346  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:02.965109  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:03.195366  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:03.266763  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:03.269592  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:03.464264  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:03.694961  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:03.765930  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:03.769117  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:03.964961  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:04.195175  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:04.266330  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:04.269395  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:04.465061  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:04.694032  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:04.765933  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:04.769169  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:04.964383  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:05.194964  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:05.295801  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:05.295845  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:05.464489  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:05.694946  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:05.767019  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:05.769298  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:05.965099  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:06.195273  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:06.266039  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:06.269049  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:06.465423  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:06.694877  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:06.767133  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:06.769150  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:06.964845  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:07.195010  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:07.267820  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:07.269737  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:07.464279  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:07.694287  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:07.766381  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:07.769626  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:07.963775  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:08.195655  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:08.267303  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:08.269249  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:08.464863  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:08.694177  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:08.766332  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:08.769465  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:08.964433  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:09.195055  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:09.266064  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:09.269056  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:09.464413  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:09.694620  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:09.766991  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:09.769191  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:09.964392  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:10.245115  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:10.266515  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:10.269611  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:10.464420  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:10.744977  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:10.767356  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:10.769511  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:10.964169  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:11.194112  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:11.266149  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:11.269375  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:11.464699  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:11.694523  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:11.766829  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:11.769199  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:11.964736  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:12.194905  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:12.266745  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:12.268966  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:12.464259  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:12.694680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:12.795307  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:12.795307  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:12.964544  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:13.194326  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:13.267186  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:13.269657  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:13.464879  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:13.746420  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:13.767049  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:13.769720  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:13.963963  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:14.244984  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:14.266376  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:14.269969  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:14.464519  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:14.745044  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:14.846274  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:14.846325  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:14.964441  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:15.194487  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:15.266734  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:15.269489  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:15.465241  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:15.694084  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:15.765943  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:15.769020  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:15.964540  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:16.194214  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:16.266512  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:16.269543  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:16.465097  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:16.694576  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:16.766715  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:16.769618  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:16.964410  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:17.194369  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:17.266690  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:17.269416  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:17.464772  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:17.694704  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:17.766652  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:17.769114  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:17.964481  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:18.194495  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:18.266717  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:18.269436  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:18.467059  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:18.694635  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:18.766922  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:18.769004  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:18.964842  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:19.195111  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:19.265921  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:19.268920  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:19.464154  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:19.694588  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:19.766977  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:19.769287  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:19.964728  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:20.245504  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:20.266798  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:20.269809  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:20.464457  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:20.694910  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:20.767152  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:20.769128  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:20.964457  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:21.194929  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:21.265911  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:21.269182  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:21.464733  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:21.695136  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:21.766348  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:21.769494  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:21.963884  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:22.193953  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:22.267385  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:22.269340  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:22.464640  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:22.694787  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:22.795435  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:22.795483  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:22.964640  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:23.194333  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:23.266341  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:23.269682  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:23.463903  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:23.748196  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:23.766087  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:23.769421  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:23.964487  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:24.194829  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:24.296013  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:24.296058  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:24.464049  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:24.694052  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:24.766060  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:24.769169  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:24.964795  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:25.195045  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:25.266152  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:25.269386  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:25.464707  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:25.694687  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:25.766650  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:25.768879  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:25.964098  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:26.193892  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:26.266879  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:26.268869  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:26.464110  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:26.694378  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:26.766407  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:26.769323  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:26.963686  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:27.194365  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:27.266140  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:27.269147  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:27.464289  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:27.694392  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:27.766381  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:27.769443  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:27.964815  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:28.247187  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:28.266077  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:28.269334  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:28.469106  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:28.747677  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:28.767327  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:28.846515  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:28.965034  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:29.259026  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:29.348089  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:29.348394  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:29.463968  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:29.746572  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:29.766569  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:29.770297  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:29.963734  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:30.246347  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:30.266179  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:30.269547  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:30.463796  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:30.695529  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:30.766460  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:30.769574  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:30.963642  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:31.194585  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:31.348253  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:31.348880  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:31.464424  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:31.694218  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:31.765820  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:31.770004  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:31.964307  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:32.194171  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:32.266495  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:32.269819  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:32.464217  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:32.694406  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:32.766422  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:32.769796  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:32.963927  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:33.195522  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:33.266199  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:33.269381  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:33.464649  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:33.695524  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:33.766907  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:33.769410  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:33.964769  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:34.194698  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:34.266513  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:34.269482  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:34.464921  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:34.695156  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:34.766542  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:34.769954  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:34.964356  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:35.196521  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:35.266335  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:35.269453  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:35.464987  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:35.695579  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:35.766056  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:35.769301  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:35.965342  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:36.194343  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:36.266603  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:36.269573  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:36.463834  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:36.695399  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:36.766424  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:36.769675  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:36.964359  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:37.194683  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:37.266533  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:37.269445  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:37.464547  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:37.694166  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:37.766286  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:37.769222  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:37.964782  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:38.194805  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:38.267461  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:38.269163  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:38.464635  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:38.694762  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:38.766823  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:38.769372  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:38.964616  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:39.195043  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:39.295445  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:39.295561  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:39.464668  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:39.695236  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:39.766373  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:39.769643  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:39.964155  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:40.194354  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:40.266770  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:40.269278  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:40.464455  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:40.694554  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:40.766893  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:40.769046  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:40.964446  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:41.194225  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:41.266789  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:41.269319  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:41.464916  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:41.694680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:41.795736  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:41.795847  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:41.964873  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:42.195010  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:42.266238  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:42.269451  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:42.464774  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:42.695383  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:42.796591  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:42.796707  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:42.963699  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:43.195244  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:43.295642  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:43.295652  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:43.465104  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:43.746724  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:43.844785  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:43.862457  731104 kapi.go:107] duration metric: took 1m20.595827968s to wait for kubernetes.io/minikube-addons=registry ...
	I0510 16:56:43.965692  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:44.247155  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:44.267390  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:44.463942  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:44.747660  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:44.849518  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:44.964125  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:45.246403  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:45.267191  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:45.464525  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:45.694657  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:45.766710  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:45.964558  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:46.194428  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:46.266779  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:46.463562  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:46.711567  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:46.766741  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:46.963852  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:47.194869  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:47.266931  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:47.464060  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:47.693641  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:47.766752  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:47.963862  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:48.194937  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:48.265982  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:48.464654  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:48.694893  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:48.779960  731104 kapi.go:107] duration metric: took 1m25.517093917s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0510 16:56:48.964647  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:49.194899  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:49.464092  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:49.694057  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:49.964566  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:50.194367  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:50.465251  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:50.694520  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:50.965523  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:51.245793  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:51.464577  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:51.694654  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:51.964638  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:52.194399  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:52.464862  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:52.695398  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:52.965487  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:53.194713  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:53.464902  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:53.694531  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:53.964338  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:54.193912  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:54.464472  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:54.694004  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:54.970052  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:55.194435  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:55.465885  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:55.745946  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:55.964458  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:56.195017  731104 kapi.go:107] duration metric: took 1m29.004082213s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0510 16:56:56.197052  731104 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-088134 cluster.
	I0510 16:56:56.199340  731104 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0510 16:56:56.200506  731104 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
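	The three gcp-auth messages above amount to a small how-to: every pod created in the cluster gets GCP credentials mounted unless its spec carries the `gcp-auth-skip-secret` label key. A minimal Go sketch of such a pod spec, using the standard k8s.io/api types — the pod name, container, and label value here are illustrative assumptions, since per the message only the presence of the label key matters:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod spec carrying the gcp-auth-skip-secret label,
// which tells the gcp-auth admission webhook not to mount GCP credentials
// into this pod (per the minikube message above).
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example", // hypothetical pod name
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true", // key presence is what matters
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "docker.io/nginx:alpine",
			}},
		},
	}
}

func main() {
	fmt.Println(skipGCPAuthPod().Name)
}

	Note that, as the last message says, adding the label only affects pods admitted after the addon is enabled; existing pods must be recreated.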
	I0510 16:56:56.464448  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:56.964413  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:57.464331  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:57.963995  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:58.464735  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:58.964374  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:59.463633  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:59.964413  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:57:00.464019  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:57:00.965683  731104 kapi.go:107] duration metric: took 1m36.505152543s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0510 16:57:00.967242  731104 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, ingress-dns, default-storageclass, nvidia-device-plugin, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0510 16:57:00.968556  731104 addons.go:514] duration metric: took 1m44.142027482s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin ingress-dns default-storageclass nvidia-device-plugin cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0510 16:57:00.968601  731104 start.go:246] waiting for cluster config update ...
	I0510 16:57:00.968642  731104 start.go:255] writing updated cluster config ...
	I0510 16:57:00.968957  731104 ssh_runner.go:195] Run: rm -f paused
	I0510 16:57:00.972751  731104 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 16:57:00.975926  731104 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-n4msm" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.979996  731104 pod_ready.go:94] pod "coredns-674b8bbfcf-n4msm" is "Ready"
	I0510 16:57:00.980019  731104 pod_ready.go:86] duration metric: took 4.069989ms for pod "coredns-674b8bbfcf-n4msm" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.982012  731104 pod_ready.go:83] waiting for pod "etcd-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.985586  731104 pod_ready.go:94] pod "etcd-addons-088134" is "Ready"
	I0510 16:57:00.985604  731104 pod_ready.go:86] duration metric: took 3.570305ms for pod "etcd-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.987493  731104 pod_ready.go:83] waiting for pod "kube-apiserver-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.990926  731104 pod_ready.go:94] pod "kube-apiserver-addons-088134" is "Ready"
	I0510 16:57:00.990942  731104 pod_ready.go:86] duration metric: took 3.430544ms for pod "kube-apiserver-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.992702  731104 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:01.376499  731104 pod_ready.go:94] pod "kube-controller-manager-addons-088134" is "Ready"
	I0510 16:57:01.376540  731104 pod_ready.go:86] duration metric: took 383.816874ms for pod "kube-controller-manager-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:01.576468  731104 pod_ready.go:83] waiting for pod "kube-proxy-rwb2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:01.977103  731104 pod_ready.go:94] pod "kube-proxy-rwb2j" is "Ready"
	I0510 16:57:01.977131  731104 pod_ready.go:86] duration metric: took 400.634309ms for pod "kube-proxy-rwb2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:02.177677  731104 pod_ready.go:83] waiting for pod "kube-scheduler-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:02.577246  731104 pod_ready.go:94] pod "kube-scheduler-addons-088134" is "Ready"
	I0510 16:57:02.577278  731104 pod_ready.go:86] duration metric: took 399.57116ms for pod "kube-scheduler-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:02.577289  731104 pod_ready.go:40] duration metric: took 1.604503102s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 16:57:02.623090  731104 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 16:57:02.625834  731104 out.go:177] * Done! kubectl is now configured to use "addons-088134" cluster and "default" namespace by default
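For reference, a rough hand-rolled equivalent of the pod_ready wait loop above, reusing two of the label selectors from the log; this is a sketch with plain kubectl, not minikube's actual code path:

kubectl --context addons-088134 -n kube-system wait pod \
  --selector k8s-app=kube-dns --for=condition=Ready --timeout=4m
kubectl --context addons-088134 -n kube-system wait pod \
  --selector component=kube-apiserver --for=condition=Ready --timeout=4m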
	
	
	==> CRI-O <==
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.420722290Z" level=info msg="Removed pod sandbox: 1f44f62fc25891145b8dadc87edb26bee2a7f75955766ef3ce343df7685ba03f" id=96c1610d-03db-4a86-9d78-000690dc29d5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.421249809Z" level=info msg="Stopping pod sandbox: f7f1cf6ea47987b7c6f2874530f4906a3dcd553dd1e494f1684f26562f9b0d02" id=81744821-c1d4-4c3c-b5b8-1d98b9874d26 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.421286134Z" level=info msg="Stopped pod sandbox (already stopped): f7f1cf6ea47987b7c6f2874530f4906a3dcd553dd1e494f1684f26562f9b0d02" id=81744821-c1d4-4c3c-b5b8-1d98b9874d26 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.421547092Z" level=info msg="Removing pod sandbox: f7f1cf6ea47987b7c6f2874530f4906a3dcd553dd1e494f1684f26562f9b0d02" id=c6758960-cf64-41f5-9013-58051e079919 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.428671301Z" level=info msg="Removed pod sandbox: f7f1cf6ea47987b7c6f2874530f4906a3dcd553dd1e494f1684f26562f9b0d02" id=c6758960-cf64-41f5-9013-58051e079919 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.429018249Z" level=info msg="Stopping pod sandbox: 057ccd9a42e9d57ab92b3ae24e908c753683c83e6c6806d39d144df7f44f76d8" id=13f90201-e2c5-41bc-8343-f70dd5a783ee name=/runtime.v1.RuntimeService/StopPodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.429056566Z" level=info msg="Stopped pod sandbox (already stopped): 057ccd9a42e9d57ab92b3ae24e908c753683c83e6c6806d39d144df7f44f76d8" id=13f90201-e2c5-41bc-8343-f70dd5a783ee name=/runtime.v1.RuntimeService/StopPodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.429277808Z" level=info msg="Removing pod sandbox: 057ccd9a42e9d57ab92b3ae24e908c753683c83e6c6806d39d144df7f44f76d8" id=b88309d9-b388-457b-9241-3a558f7f5e3b name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.436367905Z" level=info msg="Removed pod sandbox: 057ccd9a42e9d57ab92b3ae24e908c753683c83e6c6806d39d144df7f44f76d8" id=b88309d9-b388-457b-9241-3a558f7f5e3b name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.436768029Z" level=info msg="Stopping pod sandbox: 7a2f1abc2c7445c495d2e9502821f07d5ab7cf928dc52924a96b0905dea121f1" id=355321e9-c05d-4d1f-9256-463f8370b60c name=/runtime.v1.RuntimeService/StopPodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.436801615Z" level=info msg="Stopped pod sandbox (already stopped): 7a2f1abc2c7445c495d2e9502821f07d5ab7cf928dc52924a96b0905dea121f1" id=355321e9-c05d-4d1f-9256-463f8370b60c name=/runtime.v1.RuntimeService/StopPodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.437083098Z" level=info msg="Removing pod sandbox: 7a2f1abc2c7445c495d2e9502821f07d5ab7cf928dc52924a96b0905dea121f1" id=fce0d407-4fe2-4b0d-acf9-27052ebd0417 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:11 addons-088134 crio[1051]: time="2025-05-10 17:04:11.442491741Z" level=info msg="Removed pod sandbox: 7a2f1abc2c7445c495d2e9502821f07d5ab7cf928dc52924a96b0905dea121f1" id=fce0d407-4fe2-4b0d-acf9-27052ebd0417 name=/runtime.v1.RuntimeService/RemovePodSandbox
	May 10 17:04:27 addons-088134 crio[1051]: time="2025-05-10 17:04:27.247399530Z" level=info msg="Pulling image: docker.io/nginx:latest" id=73866860-c995-4794-b27b-c2e111796aef name=/runtime.v1.ImageService/PullImage
	May 10 17:04:27 addons-088134 crio[1051]: time="2025-05-10 17:04:27.248713825Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	May 10 17:04:33 addons-088134 crio[1051]: time="2025-05-10 17:04:33.246437511Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d5421987-b3a4-425d-a7df-6e4482bf6c36 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:04:33 addons-088134 crio[1051]: time="2025-05-10 17:04:33.246726897Z" level=info msg="Image docker.io/nginx:alpine not found" id=d5421987-b3a4-425d-a7df-6e4482bf6c36 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:04:45 addons-088134 crio[1051]: time="2025-05-10 17:04:45.246870827Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=384a0f18-021c-4451-8800-7d2e1c9f5196 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:04:45 addons-088134 crio[1051]: time="2025-05-10 17:04:45.247188050Z" level=info msg="Image docker.io/nginx:alpine not found" id=384a0f18-021c-4451-8800-7d2e1c9f5196 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:04:58 addons-088134 crio[1051]: time="2025-05-10 17:04:58.247372614Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=733071dd-5918-48a0-9409-653f53a66c80 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:04:58 addons-088134 crio[1051]: time="2025-05-10 17:04:58.247845144Z" level=info msg="Image docker.io/nginx:alpine not found" id=733071dd-5918-48a0-9409-653f53a66c80 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:05:13 addons-088134 crio[1051]: time="2025-05-10 17:05:13.246712932Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8717345e-208e-49a8-9bee-463a8ba1ee2d name=/runtime.v1.ImageService/ImageStatus
	May 10 17:05:13 addons-088134 crio[1051]: time="2025-05-10 17:05:13.247014605Z" level=info msg="Image docker.io/nginx:alpine not found" id=8717345e-208e-49a8-9bee-463a8ba1ee2d name=/runtime.v1.ImageService/ImageStatus
	May 10 17:05:27 addons-088134 crio[1051]: time="2025-05-10 17:05:27.247057564Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7075d75d-bffa-4a13-9650-dc9d434d86b6 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:05:27 addons-088134 crio[1051]: time="2025-05-10 17:05:27.247511320Z" level=info msg="Image docker.io/nginx:alpine not found" id=7075d75d-bffa-4a13-9650-dc9d434d86b6 name=/runtime.v1.ImageService/ImageStatus
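The tail of this CRI-O log lines up with the nginx pod's ImagePullBackOff: the kubelet keeps asking for docker.io/nginx:alpine, the runtime keeps answering "not found", and a pull of docker.io/nginx:latest is still in flight. A hedged way to reproduce the check by hand from the node, using standard minikube and crictl commands (output will differ from run to run):

minikube -p addons-088134 ssh
sudo crictl images | grep nginx            # list the nginx images CRI-O actually has locally
sudo crictl pull docker.io/nginx:alpine    # retry the pull to surface the registry or network error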
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0289e5d7dcfc0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   6ecaa28cfa8c9       busybox
	e16557a7639fa       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             8 minutes ago       Running             controller                0                   3c5807388c814       ingress-nginx-controller-7c9f76cd49-qbgd8
	52108da465b3c       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             9 minutes ago       Running             minikube-ingress-dns      0                   a54134a808151       kube-ingress-dns-minikube
	cdde87053fb27       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   9 minutes ago       Exited              patch                     0                   adff6bd9586e7       ingress-nginx-admission-patch-js6jf
	76eddab5e9c9f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   9 minutes ago       Exited              create                    0                   cbbbf9bcddde9       ingress-nginx-admission-create-f952k
	6f3083ad618b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             9 minutes ago       Running             storage-provisioner       0                   d21551c2870ff       storage-provisioner
	e812c145b81cf       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                             9 minutes ago       Running             coredns                   0                   81d12b3b0a2f1       coredns-674b8bbfcf-n4msm
	b70a60379aeea       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                                             10 minutes ago      Running             kindnet-cni               0                   fbd9b8064a4da       kindnet-9929f
	b9b40eeed72ce       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                                             10 minutes ago      Running             kube-proxy                0                   d136a9352b030       kube-proxy-rwb2j
	b5770c4e2c673       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4                                                             10 minutes ago      Running             kube-apiserver            0                   8c9f3e576d76d       kube-apiserver-addons-088134
	e353710533230       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                                             10 minutes ago      Running             kube-scheduler            0                   370464a525463       kube-scheduler-addons-088134
	7ea1e306698b2       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                                             10 minutes ago      Running             kube-controller-manager   0                   08327187c3b18       kube-controller-manager-addons-088134
	1e78801fed908       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                             10 minutes ago      Running             etcd                      0                   2a1843433686f       etcd-addons-088134
	
	
	==> coredns [e812c145b81cf9e9d4792e1c5dfc6a18881e0c38667fed9f37ea51d6155447b6] <==
	[INFO] 10.244.0.19:41430 - 60591 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000162704s
	[INFO] 10.244.0.19:39041 - 27741 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004008951s
	[INFO] 10.244.0.19:39041 - 28058 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005266612s
	[INFO] 10.244.0.19:48593 - 6945 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005297496s
	[INFO] 10.244.0.19:48593 - 7206 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005444124s
	[INFO] 10.244.0.19:46297 - 42903 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004088541s
	[INFO] 10.244.0.19:46297 - 42616 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007767338s
	[INFO] 10.244.0.19:36799 - 1231 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124063s
	[INFO] 10.244.0.19:36799 - 1514 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178124s
	[INFO] 10.244.0.22:38130 - 28750 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272398s
	[INFO] 10.244.0.22:49886 - 59874 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000377322s
	[INFO] 10.244.0.22:33015 - 1597 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000485937s
	[INFO] 10.244.0.22:43790 - 48974 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123618s
	[INFO] 10.244.0.22:50579 - 38783 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113828s
	[INFO] 10.244.0.22:43801 - 50419 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000397572s
	[INFO] 10.244.0.22:39106 - 57274 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006031932s
	[INFO] 10.244.0.22:48132 - 36422 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006927122s
	[INFO] 10.244.0.22:43550 - 23549 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007352277s
	[INFO] 10.244.0.22:51021 - 31565 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007545825s
	[INFO] 10.244.0.22:52551 - 19093 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006624193s
	[INFO] 10.244.0.22:32823 - 55046 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.016421236s
	[INFO] 10.244.0.22:59663 - 14459 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002024561s
	[INFO] 10.244.0.22:58147 - 56118 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002083574s
	[INFO] 10.244.0.25:53567 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000195327s
	[INFO] 10.244.0.25:49086 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157644s
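The NXDOMAIN runs above are ordinary resolv.conf search-path expansion (cluster.local, the GCE zone domains, google.internal) tried before the bare service name resolves. To reproduce a single lookup from inside the cluster, assuming the busybox pod from the container status section above carries the usual busybox nslookup applet:

kubectl --context addons-088134 exec busybox -- nslookup registry.kube-system.svc.cluster.local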
	
	
	==> describe nodes <==
	Name:               addons-088134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-088134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=addons-088134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T16_55_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-088134
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 16:55:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-088134
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:05:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:55:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:55:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:55:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-088134
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20500d91395d44e28235d4dd9b851800
	  System UUID:                b82e7783-6ef2-4a0a-9063-340ec333f400
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  ingress-nginx               ingress-nginx-controller-7c9f76cd49-qbgd8    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-674b8bbfcf-n4msm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-088134                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-9929f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-088134                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-088134        200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-rwb2j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-088134                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-088134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-088134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node addons-088134 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                kubelet          Node addons-088134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                kubelet          Node addons-088134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                kubelet          Node addons-088134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node addons-088134 event: Registered Node addons-088134 in Controller
	  Normal   NodeReady                9m38s              kubelet          Node addons-088134 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +1.002546] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +2.011769] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000002] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000003] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +4.063544] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000009] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003973] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +8.191083] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	
	
	==> etcd [1e78801fed90811e8a35c870937504b5146bf46e93829751abb0bd47821c3fde] <==
	{"level":"info","ts":"2025-05-10T16:55:18.655716Z","caller":"traceutil/trace.go:171","msg":"trace[1793038919] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"109.985081ms","start":"2025-05-10T16:55:18.545691Z","end":"2025-05-10T16:55:18.655677Z","steps":["trace[1793038919] 'process raft request'  (duration: 17.506981ms)","trace[1793038919] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/serviceaccounts/kube-system/disruption-controller; req_size:202; } (duration: 85.2594ms)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T16:55:19.653950Z","caller":"traceutil/trace.go:171","msg":"trace[1974616962] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"292.371235ms","start":"2025-05-10T16:55:19.361554Z","end":"2025-05-10T16:55:19.653925Z","steps":["trace[1974616962] 'process raft request'  (duration: 197.541808ms)","trace[1974616962] 'compare'  (duration: 93.069113ms)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T16:55:19.950972Z","caller":"traceutil/trace.go:171","msg":"trace[1679853814] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"102.173802ms","start":"2025-05-10T16:55:19.848781Z","end":"2025-05-10T16:55:19.950955Z","steps":["trace[1679853814] 'process raft request'  (duration: 102.003177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:55:20.164634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.737288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T16:55:20.164832Z","caller":"traceutil/trace.go:171","msg":"trace[999111871] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:428; }","duration":"109.974543ms","start":"2025-05-10T16:55:20.054837Z","end":"2025-05-10T16:55:20.164811Z","steps":["trace[999111871] 'agreement among raft nodes before linearized reading'  (duration: 109.714427ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.169270Z","caller":"traceutil/trace.go:171","msg":"trace[780922946] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"101.070958ms","start":"2025-05-10T16:55:20.068183Z","end":"2025-05-10T16:55:20.169254Z","steps":["trace[780922946] 'process raft request'  (duration: 85.021495ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.463412Z","caller":"traceutil/trace.go:171","msg":"trace[1196043989] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"100.088014ms","start":"2025-05-10T16:55:20.363307Z","end":"2025-05-10T16:55:20.463395Z","steps":["trace[1196043989] 'process raft request'  (duration: 100.010338ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.556806Z","caller":"traceutil/trace.go:171","msg":"trace[99842361] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"192.43927ms","start":"2025-05-10T16:55:20.364343Z","end":"2025-05-10T16:55:20.556782Z","steps":["trace[99842361] 'process raft request'  (duration: 191.773888ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.557005Z","caller":"traceutil/trace.go:171","msg":"trace[2105763833] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"190.924065ms","start":"2025-05-10T16:55:20.366068Z","end":"2025-05-10T16:55:20.556992Z","steps":["trace[2105763833] 'process raft request'  (duration: 190.222065ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.560703Z","caller":"traceutil/trace.go:171","msg":"trace[1451780068] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"192.111428ms","start":"2025-05-10T16:55:20.368577Z","end":"2025-05-10T16:55:20.560689Z","steps":["trace[1451780068] 'process raft request'  (duration: 187.785924ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.560942Z","caller":"traceutil/trace.go:171","msg":"trace[479795440] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"191.72829ms","start":"2025-05-10T16:55:20.368505Z","end":"2025-05-10T16:55:20.560234Z","steps":["trace[479795440] 'process raft request'  (duration: 187.830438ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:55:20.765194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.238789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-05-10T16:55:20.765361Z","caller":"traceutil/trace.go:171","msg":"trace[754062756] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:459; }","duration":"104.431846ms","start":"2025-05-10T16:55:20.660912Z","end":"2025-05-10T16:55:20.765344Z","steps":["trace[754062756] 'agreement among raft nodes before linearized reading'  (duration: 104.193814ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:55:20.766161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.177977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T16:55:20.766204Z","caller":"traceutil/trace.go:171","msg":"trace[563745967] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:459; }","duration":"105.24304ms","start":"2025-05-10T16:55:20.660951Z","end":"2025-05-10T16:55:20.766194Z","steps":["trace[563745967] 'agreement among raft nodes before linearized reading'  (duration: 105.179473ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.766879Z","caller":"traceutil/trace.go:171","msg":"trace[1544329589] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"103.395997ms","start":"2025-05-10T16:55:20.663473Z","end":"2025-05-10T16:55:20.766869Z","steps":["trace[1544329589] 'process raft request'  (duration: 100.915053ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.767214Z","caller":"traceutil/trace.go:171","msg":"trace[716921846] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"101.480542ms","start":"2025-05-10T16:55:20.665724Z","end":"2025-05-10T16:55:20.767205Z","steps":["trace[716921846] 'process raft request'  (duration: 98.846083ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.767462Z","caller":"traceutil/trace.go:171","msg":"trace[1107158292] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"103.812816ms","start":"2025-05-10T16:55:20.663635Z","end":"2025-05-10T16:55:20.767448Z","steps":["trace[1107158292] 'process raft request'  (duration: 100.893861ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.952550Z","caller":"traceutil/trace.go:171","msg":"trace[168546519] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"187.152437ms","start":"2025-05-10T16:55:20.765381Z","end":"2025-05-10T16:55:20.952533Z","steps":["trace[168546519] 'process raft request'  (duration: 179.936197ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.953444Z","caller":"traceutil/trace.go:171","msg":"trace[1318291614] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"186.597479ms","start":"2025-05-10T16:55:20.766831Z","end":"2025-05-10T16:55:20.953429Z","steps":["trace[1318291614] 'process raft request'  (duration: 186.204936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:57:20.946308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.526721ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128037157923451789 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ipaddresses/10.101.189.46\" mod_revision:0 > success:<request_put:<key:\"/registry/ipaddresses/10.101.189.46\" value_size:540 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-05-10T16:57:20.946431Z","caller":"traceutil/trace.go:171","msg":"trace[1761554193] transaction","detail":"{read_only:false; response_revision:1378; number_of_response:1; }","duration":"176.149156ms","start":"2025-05-10T16:57:20.770256Z","end":"2025-05-10T16:57:20.946405Z","steps":["trace[1761554193] 'process raft request'  (duration: 52.079278ms)","trace[1761554193] 'compare'  (duration: 123.382791ms)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T17:05:07.372604Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1944}
	{"level":"info","ts":"2025-05-10T17:05:07.396837Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1944,"took":"23.598268ms","hash":53648152,"current-db-size-bytes":8368128,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":5337088,"current-db-size-in-use":"5.3 MB"}
	{"level":"info","ts":"2025-05-10T17:05:07.396895Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":53648152,"revision":1944,"compact-revision":-1}
	
	
	==> kernel <==
	 17:05:39 up  2:48,  0 users,  load average: 0.04, 9.36, 52.06
	Linux addons-088134 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b70a60379aeeacf04436fa8980dcb6ab38da85f87f9f253453027f66640e2581] <==
	I0510 17:03:30.851554       1 main.go:301] handling current node
	I0510 17:03:40.844537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:40.844922       1 main.go:301] handling current node
	I0510 17:03:50.851510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:50.851549       1 main.go:301] handling current node
	I0510 17:04:00.845242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:04:00.845279       1 main.go:301] handling current node
	I0510 17:04:10.852998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:04:10.853050       1 main.go:301] handling current node
	I0510 17:04:20.844737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:04:20.844776       1 main.go:301] handling current node
	I0510 17:04:30.851537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:04:30.851585       1 main.go:301] handling current node
	I0510 17:04:40.847505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:04:40.847553       1 main.go:301] handling current node
	I0510 17:04:50.847499       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:04:50.847537       1 main.go:301] handling current node
	I0510 17:05:00.851506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:05:00.851547       1 main.go:301] handling current node
	I0510 17:05:10.845326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:05:10.845367       1 main.go:301] handling current node
	I0510 17:05:20.845411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:05:20.845449       1 main.go:301] handling current node
	I0510 17:05:30.850541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:05:30.850579       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b5770c4e2c67394cdfbeabd79aa8d3a4ab1a86ae2dc7c65e9a22daa83002e410] <==
	I0510 16:57:32.514968       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:33.713219       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:38.166981       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0510 16:57:38.349640       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.38.46"}
	I0510 16:57:38.353544       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:43.146215       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0510 16:57:43.761653       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W0510 16:57:44.163987       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0510 16:57:53.822344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0510 16:57:57.592768       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0510 17:03:53.288564       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:03:53.288625       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:03:53.354900       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:03:53.354943       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:03:53.362946       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:03:53.363081       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:03:53.376131       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:03:53.376175       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:03:53.454176       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0510 17:03:53.454214       1 handler.go:288] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0510 17:03:54.297498       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W0510 17:03:54.355161       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0510 17:03:54.454288       1 cacher.go:183] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0510 17:03:54.564589       1 cacher.go:183] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0510 17:05:08.969605       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [7ea1e306698b2d650f44b1de9eb0091223e6eb458ffec2c00e8ee7cbd23a65b6] <==
	E0510 17:03:55.276206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:03:55.871694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:03:55.987664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:03:58.191378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:03:58.212394       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:03:58.425470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:01.965111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:03.261716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:03.971966       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:10.832132       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:12.670771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0510 17:04:16.085766       1 reconciler.go:360] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^e7d3de84-2dbf-11f0-8f64-526a04c276b2" nodeName="addons-088134" scheduledPods=["default/task-pv-pod"]
	E0510 17:04:16.219826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0510 17:04:16.256244       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0510 17:04:16.256287       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:04:16.664231       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0510 17:04:16.664278       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0510 17:04:32.405472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:32.457506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:34.451349       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:04:39.584025       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:05:02.524120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:05:05.402620       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:05:05.614491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:05:22.010349       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [b9b40eeed72ceafe195eff643c7bbf84bcc83631e8193e0fea1cd093852d843b] <==
	I0510 16:55:20.067315       1 server_linux.go:63] "Using iptables proxy"
	I0510 16:55:20.945955       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 16:55:20.946147       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 16:55:21.551531       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 16:55:21.551638       1 server_linux.go:145] "Using iptables Proxier"
	I0510 16:55:21.663057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 16:55:21.663804       1 server.go:516] "Version info" version="v1.33.0"
	I0510 16:55:21.664560       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 16:55:21.666575       1 config.go:199] "Starting service config controller"
	I0510 16:55:21.666606       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 16:55:21.666641       1 config.go:105] "Starting endpoint slice config controller"
	I0510 16:55:21.666651       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 16:55:21.666666       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 16:55:21.666671       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 16:55:21.667612       1 config.go:329] "Starting node config controller"
	I0510 16:55:21.667623       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 16:55:21.766987       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 16:55:21.767713       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 16:55:21.767585       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 16:55:21.767545       1 shared_informer.go:357] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e353710533230f73d1cc7951b4ae81a4668224ac29497acdc584f5eece3db3ae] <==
	E0510 16:55:09.058014       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 16:55:09.058016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 16:55:09.058038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 16:55:09.058112       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 16:55:09.058141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 16:55:09.058151       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 16:55:09.058197       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 16:55:09.058221       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 16:55:09.058221       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 16:55:09.058320       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 16:55:09.058348       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 16:55:09.058412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 16:55:09.058434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 16:55:09.058470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 16:55:09.945178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 16:55:09.945178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 16:55:09.953668       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 16:55:09.991983       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 16:55:10.015479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 16:55:10.044109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 16:55:10.086064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 16:55:10.128629       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 16:55:10.167474       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 16:55:10.197982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0510 16:55:11.755232       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.303552    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b97ee80eb6c812ab11c326d37d70b88df57203dd60dce7c18f9ef888620ad6ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b97ee80eb6c812ab11c326d37d70b88df57203dd60dce7c18f9ef888620ad6ea/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.304647    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d2699c041fe521cb4c90fde4a6bff058632d0a6d4b8d4dfb93f7157fa72b3135/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d2699c041fe521cb4c90fde4a6bff058632d0a6d4b8d4dfb93f7157fa72b3135/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.304671    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5cc71eeec5e522cb88485c3ce6e87fd6b11ca9c34e70b62ad7f70defbfee3edd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5cc71eeec5e522cb88485c3ce6e87fd6b11ca9c34e70b62ad7f70defbfee3edd/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.306856    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0d43a02cdb348220f45039324ea7e19a8df6c295cf72628f26b10b192e5a0eb5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0d43a02cdb348220f45039324ea7e19a8df6c295cf72628f26b10b192e5a0eb5/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.306884    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2f666d60118501f57ed39682c4d1259eee21286ef8f4d476212959f9a6bac724/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2f666d60118501f57ed39682c4d1259eee21286ef8f4d476212959f9a6bac724/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.307995    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/00a9c8d931b15be6adaacdaf5b14e6e6b03d0ab75b9af4dbe6e2a871db0c322a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/00a9c8d931b15be6adaacdaf5b14e6e6b03d0ab75b9af4dbe6e2a871db0c322a/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.310240    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5d0663b663d4a7f1e10b5ff70e92edce60414265fe23a44bd8aa33f66d5aeecc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5d0663b663d4a7f1e10b5ff70e92edce60414265fe23a44bd8aa33f66d5aeecc/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.311373    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9d5f45f6a4a39d7688a9b12e68e25c30492d7c7ee7950d4711fff67d794ff081/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9d5f45f6a4a39d7688a9b12e68e25c30492d7c7ee7950d4711fff67d794ff081/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.313666    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0d43a02cdb348220f45039324ea7e19a8df6c295cf72628f26b10b192e5a0eb5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0d43a02cdb348220f45039324ea7e19a8df6c295cf72628f26b10b192e5a0eb5/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.313690    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/890e148ce1d0ed4c4c639ebd8824775ae02959821e9e9ebf4ec6588ec36de696/diff" to get inode usage: stat /var/lib/containers/storage/overlay/890e148ce1d0ed4c4c639ebd8824775ae02959821e9e9ebf4ec6588ec36de696/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.315919    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0b1526e9d600922b027d83bf47b4bd87a7764b78eeb6749eafac1202947ae909/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0b1526e9d600922b027d83bf47b4bd87a7764b78eeb6749eafac1202947ae909/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.315992    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9d5f45f6a4a39d7688a9b12e68e25c30492d7c7ee7950d4711fff67d794ff081/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9d5f45f6a4a39d7688a9b12e68e25c30492d7c7ee7950d4711fff67d794ff081/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.318151    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/890e148ce1d0ed4c4c639ebd8824775ae02959821e9e9ebf4ec6588ec36de696/diff" to get inode usage: stat /var/lib/containers/storage/overlay/890e148ce1d0ed4c4c639ebd8824775ae02959821e9e9ebf4ec6588ec36de696/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.359678    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/872b9de5a5eca48a262c731251c019e1e9d03604739d92ba33a378a34f64ba3a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/872b9de5a5eca48a262c731251c019e1e9d03604739d92ba33a378a34f64ba3a/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.366028    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2f666d60118501f57ed39682c4d1259eee21286ef8f4d476212959f9a6bac724/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2f666d60118501f57ed39682c4d1259eee21286ef8f4d476212959f9a6bac724/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.485662    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896711485431717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:05:11 addons-088134 kubelet[1692]: E0510 17:05:11.485703    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896711485431717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:05:13 addons-088134 kubelet[1692]: E0510 17:05:13.248082    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="670a8744-ab16-44a6-a1c9-0a18c96cf593"
	May 10 17:05:21 addons-088134 kubelet[1692]: E0510 17:05:21.487890    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896721487626417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:05:21 addons-088134 kubelet[1692]: E0510 17:05:21.487922    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896721487626417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:05:23 addons-088134 kubelet[1692]: E0510 17:05:23.246714    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="58e565b4-a342-41bf-94fe-2b8a1251e1d1"
	May 10 17:05:27 addons-088134 kubelet[1692]: E0510 17:05:27.247846    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="670a8744-ab16-44a6-a1c9-0a18c96cf593"
	May 10 17:05:31 addons-088134 kubelet[1692]: E0510 17:05:31.490708    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896731490391119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:05:31 addons-088134 kubelet[1692]: E0510 17:05:31.490755    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896731490391119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:05:38 addons-088134 kubelet[1692]: E0510 17:05:38.246107    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="58e565b4-a342-41bf-94fe-2b8a1251e1d1"
	
	
	==> storage-provisioner [6f3083ad618b6ea2836326198796611c276a0e493b0cc7dabfd052526bce9edc] <==
	W0510 17:05:14.851955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:16.854779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:16.858994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:18.862163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:18.866024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:20.869592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:20.873945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:22.876896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:22.881301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:24.885203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:24.889552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:26.892528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:26.897838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:28.901292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:28.906787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:30.909998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:30.914008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:32.917456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:32.922715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:34.925671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:34.929574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:36.932701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:36.938150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:38.941686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:05:38.946034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
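Editor's note: the dump above points at a single root cause for the stuck pods: the kubelet on addons-088134 cannot pull docker.io/nginx:alpine or docker.io/nginx because Docker Hub's unauthenticated pull rate limit is exhausted (the repeated "toomanyrequests" errors). The kube-scheduler "forbidden" list/watch errors at the top of the dump are startup noise from before its RBAC bindings propagated and stop at the 16:55:11 "Caches are synced" line, and the fsHandler/eviction_manager errors appear to be stats-collection noise for already-removed overlay layers, so the decisive errors are the ImagePullBackOff ones. The remaining Hub quota can be checked directly against the registry; a minimal sketch using curl and jq against the ratelimitpreview/test repository that Docker's documentation uses for this probe (not part of this test harness):

	# Fetch an anonymous pull token for the rate-limit probe repository.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# A HEAD request reports the quota headers without consuming a pull.
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
	  | grep -i '^ratelimit'

The ratelimit-limit and ratelimit-remaining headers show the quota window and the pulls left for this source IP.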
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-088134 -n addons-088134
helpers_test.go:261: (dbg) Run:  kubectl --context addons-088134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-088134 describe pod nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-088134 describe pod nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf: exit status 1 (72.093634ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-088134/192.168.49.2
	Start Time:       Sat, 10 May 2025 16:57:38 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vv759 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vv759:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-088134
	  Warning  Failed     7m30s                 kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m42s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    109s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     79s (x5 over 7m30s)   kubelet            Error: ErrImagePull
	  Warning  Failed     79s (x3 over 6m14s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     13s (x16 over 7m29s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    0s (x17 over 7m29s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-088134/192.168.49.2
	Start Time:       Sat, 10 May 2025 16:57:50 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v9qc6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-v9qc6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m50s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-088134
	  Warning  Failed     4m12s (x2 over 5m43s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    73s (x5 over 7m50s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     43s (x3 over 6m45s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     43s (x5 over 6m45s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x14 over 6m44s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2s (x14 over 6m44s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f952k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-js6jf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-088134 describe pod nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf: exit status 1
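Editor's note: the exit status 1 here is expected rather than a further failure. The nginx and task-pv-pod descriptions succeeded, and the two NotFound errors refer to the ingress-nginx admission webhook pods, which are created by one-shot Jobs and cleaned up once those Jobs complete. One way to confirm this by hand (commands assumed, not part of the harness):

	kubectl --context addons-088134 get jobs -n ingress-nginx
	kubectl --context addons-088134 get pods -n ingress-nginx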
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable ingress-dns --alsologtostderr -v=1: (1.360995844s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable ingress --alsologtostderr -v=1: (7.636205296s)
--- FAIL: TestAddons/parallel/Ingress (491.90s)
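Editor's note: since the 491.9s failure reduces to the Docker Hub rate limit rather than to the ingress addon itself, one way to make a rerun independent of the registry quota is to pull the image on the host (where logging in raises the quota) and side-load it into the minikube node. A minimal sketch, using the profile name from the logs:

	docker login                                                   # authenticated pulls get a larger quota
	docker pull docker.io/nginx:alpine                             # pull once on the host
	minikube -p addons-088134 image load docker.io/nginx:alpine    # copy into the node's image store

Because alpine is a non-latest tag, the pod's default imagePullPolicy is IfNotPresent, so the pod already sitting in ImagePullBackOff should pick up the side-loaded image on its next retry instead of contacting docker.io again.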

                                                
                                    
TestAddons/parallel/CSI (388.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0510 16:57:32.073610  729815 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0510 16:57:32.076832  729815 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0510 16:57:32.076857  729815 kapi.go:107] duration metric: took 3.27966ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.291266ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-088134 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc hpvc -o jsonpath={.status.phase} -n default
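Editor's note: the nineteen identical helpers_test.go:394 lines above are the harness polling the claim's phase until it reports Bound. The claim did bind (task-pv-pod is subsequently scheduled and mounts it), so the eventual failure below is the image pull, not provisioning. The same check can be run by hand, a sketch mirroring the command in the log:

	kubectl --context addons-088134 get pvc hpvc -n default -o jsonpath='{.status.phase}'
	# prints Bound once csi-hostpath-driver has provisioned the volume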
addons_test.go:501: (dbg) Run:  kubectl --context addons-088134 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [58e565b4-a342-41bf-94fe-2b8a1251e1d1] Pending
helpers_test.go:344: "task-pv-pod" [58e565b4-a342-41bf-94fe-2b8a1251e1d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:506: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:506: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-088134 -n addons-088134
addons_test.go:506: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-05-10 17:03:50.684907492 +0000 UTC m=+570.366694622
addons_test.go:506: (dbg) Run:  kubectl --context addons-088134 describe po task-pv-pod -n default
addons_test.go:506: (dbg) kubectl --context addons-088134 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-088134/192.168.49.2
Start Time:       Sat, 10 May 2025 16:57:50 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
IP:  10.244.0.30
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v9qc6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-v9qc6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-088134
Warning  Failed     2m22s (x2 over 3m53s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    90s (x4 over 6m)       kubelet            Pulling image "docker.io/nginx"
Warning  Failed     55s (x2 over 4m55s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     55s (x4 over 4m55s)    kubelet            Error: ErrImagePull
Normal   BackOff    3s (x9 over 4m54s)     kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     3s (x9 over 4m54s)     kubelet            Error: ImagePullBackOff
addons_test.go:506: (dbg) Run:  kubectl --context addons-088134 logs task-pv-pod -n default
addons_test.go:506: (dbg) Non-zero exit: kubectl --context addons-088134 logs task-pv-pod -n default: exit status 1 (70.016966ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:506: kubectl --context addons-088134 logs task-pv-pod -n default: exit status 1
addons_test.go:507: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-088134
helpers_test.go:235: (dbg) docker inspect addons-088134:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3",
	        "Created": "2025-05-10T16:54:55.051517583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731712,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T16:54:55.084728242Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/hosts",
	        "LogPath": "/var/lib/docker/containers/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3-json.log",
	        "Name": "/addons-088134",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-088134:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-088134",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3",
	                "LowerDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8daff73cd2faa3faace2a48598424ad0928cc31ae480bc324069efa2cc2dc12e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-088134",
	                "Source": "/var/lib/docker/volumes/addons-088134/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-088134",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-088134",
	                "name.minikube.sigs.k8s.io": "addons-088134",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccf4c159a1e3d7c14c6b2af2b0e83245ce1734e599b4a1db79a0723d9527d987",
	            "SandboxKey": "/var/run/docker/netns/ccf4c159a1e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-088134": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:95:eb:e2:a0:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1451ebaebe192172eaaa1efea72c06a6f6dd3a306dcb7d4f5031305b008d7ead",
	                    "EndpointID": "209ae50c65ab2696f593be13dc9ae5cbe9e907be6254d2a0be92544909791911",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-088134",
	                        "bde85e095a68"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
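Editor's note: individual fields in the inspect output above can be pulled out with a format string rather than by scanning the JSON. A sketch that extracts the host port mapped to the apiserver's 8443/tcp, per the NetworkSettings.Ports block above:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' addons-088134
	# prints 33142, the 127.0.0.1 port minikube uses to reach the apiserver on 192.168.49.2:8443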
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-088134 -n addons-088134
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 logs -n 25: (1.170654358s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| delete  | -p download-only-184104                                                                     | download-only-184104   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| delete  | -p download-only-029562                                                                     | download-only-029562   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| delete  | -p download-only-184104                                                                     | download-only-184104   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| start   | --download-only -p                                                                          | download-docker-238188 | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | download-docker-238188                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-238188                                                                   | download-docker-238188 | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-854589   | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | binary-mirror-854589                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37525                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-854589                                                                     | binary-mirror-854589   | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| addons  | enable dashboard -p                                                                         | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | addons-088134                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | addons-088134                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-088134 --wait=true                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | -p addons-088134                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-088134 ip                                                                            | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-088134 ssh cat                                                                       | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | /opt/local-path-provisioner/pvc-d21bcf7d-7863-46d1-95c2-f7795a677260_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-088134 addons disable                                                                | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-088134 addons                                                                        | addons-088134          | jenkins | v1.35.0 | 10 May 25 16:57 UTC | 10 May 25 16:57 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
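
Each "addons" row above is an invocation of the minikube addons subcommand against the addons-088134 profile. A minimal sketch of replaying one of the logged disable calls (binary path as used elsewhere in this report; the -p form is an assumption about how the audit args map back onto a command line):

	# Replay one audit-table entry against the same profile (sketch).
	out/minikube-linux-amd64 -p addons-088134 addons disable volcano --alsologtostderr -v=1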
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 16:54:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 16:54:33.602414  731104 out.go:345] Setting OutFile to fd 1 ...
	I0510 16:54:33.602878  731104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:33.602892  731104 out.go:358] Setting ErrFile to fd 2...
	I0510 16:54:33.602899  731104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:33.603213  731104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 16:54:33.603888  731104 out.go:352] Setting JSON to false
	I0510 16:54:33.604776  731104 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9421,"bootTime":1746886653,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 16:54:33.604884  731104 start.go:140] virtualization: kvm guest
	I0510 16:54:33.607067  731104 out.go:177] * [addons-088134] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 16:54:33.608426  731104 notify.go:220] Checking for updates...
	I0510 16:54:33.608457  731104 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 16:54:33.609549  731104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 16:54:33.610937  731104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 16:54:33.612286  731104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 16:54:33.613635  731104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 16:54:33.615012  731104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 16:54:33.616496  731104 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 16:54:33.639029  731104 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 16:54:33.639115  731104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:33.687784  731104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:33.678668893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:33.687895  731104 docker.go:318] overlay module found
	I0510 16:54:33.689693  731104 out.go:177] * Using the docker driver based on user configuration
	I0510 16:54:33.690995  731104 start.go:304] selected driver: docker
	I0510 16:54:33.691011  731104 start.go:908] validating driver "docker" against <nil>
	I0510 16:54:33.691026  731104 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 16:54:33.692047  731104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:33.740934  731104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:33.732159464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:33.741185  731104 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 16:54:33.741458  731104 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0510 16:54:33.743486  731104 out.go:177] * Using Docker driver with root privileges
	I0510 16:54:33.744623  731104 cni.go:84] Creating CNI manager for ""
	I0510 16:54:33.744703  731104 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:54:33.744718  731104 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0510 16:54:33.744826  731104 start.go:347] cluster config:
	{Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
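
The cluster config above is persisted as JSON under the profile directory (see the "Saving config" line below). Assuming jq is available on the host, individual fields can be read back out; the field names here follow the struct dump above:

	# Read selected fields from the saved profile config (sketch; jq assumed installed).
	CFG=/home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json
	jq -r '.KubernetesConfig.KubernetesVersion, .KubernetesConfig.ContainerRuntime' "$CFG"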
	I0510 16:54:33.747109  731104 out.go:177] * Starting "addons-088134" primary control-plane node in "addons-088134" cluster
	I0510 16:54:33.748302  731104 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 16:54:33.749589  731104 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 16:54:33.750647  731104 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 16:54:33.750687  731104 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 16:54:33.750697  731104 cache.go:56] Caching tarball of preloaded images
	I0510 16:54:33.750756  731104 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 16:54:33.750797  731104 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 16:54:33.750806  731104 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 16:54:33.751171  731104 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json ...
	I0510 16:54:33.751199  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json: {Name:mk8b2b968bcd8f9e3aea76561f259d04a50289d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:33.766962  731104 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 to local cache
	I0510 16:54:33.767103  731104 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local cache directory
	I0510 16:54:33.767122  731104 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local cache directory, skipping pull
	I0510 16:54:33.767126  731104 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in cache, skipping pull
	I0510 16:54:33.767134  731104 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 as a tarball
	I0510 16:54:33.767142  731104 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 from local cache
	I0510 16:54:45.499105  731104 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 from cached tarball
	I0510 16:54:45.499173  731104 cache.go:230] Successfully downloaded all kic artifacts
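
The kic base image is pinned by digest, so it can be fetched ahead of time independently of minikube; a minimal sketch using a digest-only reference (the tag is dropped, which docker pull accepts):

	# Pre-pull the pinned kicbase image so the cache load above finds it locally (sketch).
	docker pull gcr.io/k8s-minikube/kicbase-builds@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155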
	I0510 16:54:45.499243  731104 start.go:360] acquireMachinesLock for addons-088134: {Name:mk070a6c546592528f175388e4fddc516de6c3e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 16:54:45.499362  731104 start.go:364] duration metric: took 91.567µs to acquireMachinesLock for "addons-088134"
	I0510 16:54:45.499404  731104 start.go:93] Provisioning new machine with config: &{Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 16:54:45.499528  731104 start.go:125] createHost starting for "" (driver="docker")
	I0510 16:54:45.501519  731104 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0510 16:54:45.501774  731104 start.go:159] libmachine.API.Create for "addons-088134" (driver="docker")
	I0510 16:54:45.501809  731104 client.go:168] LocalClient.Create starting
	I0510 16:54:45.501943  731104 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem
	I0510 16:54:46.313526  731104 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem
	I0510 16:54:46.401998  731104 cli_runner.go:164] Run: docker network inspect addons-088134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0510 16:54:46.417857  731104 cli_runner.go:211] docker network inspect addons-088134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0510 16:54:46.417926  731104 network_create.go:284] running [docker network inspect addons-088134] to gather additional debugging logs...
	I0510 16:54:46.417955  731104 cli_runner.go:164] Run: docker network inspect addons-088134
	W0510 16:54:46.433376  731104 cli_runner.go:211] docker network inspect addons-088134 returned with exit code 1
	I0510 16:54:46.433407  731104 network_create.go:287] error running [docker network inspect addons-088134]: docker network inspect addons-088134: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-088134 not found
	I0510 16:54:46.433420  731104 network_create.go:289] output of [docker network inspect addons-088134]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-088134 not found
	
	** /stderr **
	I0510 16:54:46.433540  731104 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 16:54:46.450034  731104 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d280d0}
	I0510 16:54:46.450092  731104 network_create.go:124] attempt to create docker network addons-088134 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0510 16:54:46.450147  731104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-088134 addons-088134
	I0510 16:54:46.501191  731104 network_create.go:108] docker network addons-088134 192.168.49.0/24 created
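
The subnet and gateway reported above can be read back with a plain docker network inspect, without the full Go template minikube uses:

	# Confirm the subnet/gateway of the freshly created network (sketch).
	docker network inspect addons-088134 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected, per the log: 192.168.49.0/24 192.168.49.1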
	I0510 16:54:46.501226  731104 kic.go:121] calculated static IP "192.168.49.2" for the "addons-088134" container
	I0510 16:54:46.501312  731104 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0510 16:54:46.517151  731104 cli_runner.go:164] Run: docker volume create addons-088134 --label name.minikube.sigs.k8s.io=addons-088134 --label created_by.minikube.sigs.k8s.io=true
	I0510 16:54:46.535023  731104 oci.go:103] Successfully created a docker volume addons-088134
	I0510 16:54:46.535114  731104 cli_runner.go:164] Run: docker run --rm --name addons-088134-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-088134 --entrypoint /usr/bin/test -v addons-088134:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -d /var/lib
	I0510 16:54:50.397117  731104 cli_runner.go:217] Completed: docker run --rm --name addons-088134-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-088134 --entrypoint /usr/bin/test -v addons-088134:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -d /var/lib: (3.86194859s)
	I0510 16:54:50.397155  731104 oci.go:107] Successfully prepared a docker volume addons-088134
	I0510 16:54:50.397190  731104 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 16:54:50.397219  731104 kic.go:194] Starting extracting preloaded images to volume ...
	I0510 16:54:50.397299  731104 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-088134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -I lz4 -xf /preloaded.tar -C /extractDir
	I0510 16:54:54.988647  731104 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-088134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 -I lz4 -xf /preloaded.tar -C /extractDir: (4.591299795s)
	I0510 16:54:54.988683  731104 kic.go:203] duration metric: took 4.591460681s to extract preloaded images to volume ...
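
The preload is unpacked into the addons-088134 docker volume rather than into a container; a throwaway container can confirm the image store landed (busybox is an assumption here, any image with ls works):

	# List the extracted CRI-O image store inside the minikube volume (sketch).
	docker run --rm -v addons-088134:/var busybox ls /var/lib/containers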
	W0510 16:54:54.988811  731104 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0510 16:54:54.988909  731104 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0510 16:54:55.036443  731104 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-088134 --name addons-088134 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-088134 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-088134 --network addons-088134 --ip 192.168.49.2 --volume addons-088134:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155
	I0510 16:54:55.322844  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Running}}
	I0510 16:54:55.339984  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:54:55.357949  731104 cli_runner.go:164] Run: docker exec addons-088134 stat /var/lib/dpkg/alternatives/iptables
	I0510 16:54:55.398886  731104 oci.go:144] the created container "addons-088134" has a running status.
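
Each guest port in the docker run above is published to an ephemeral port on 127.0.0.1 (the SSH port resolves to 33139 further down). Assuming the docker CLI on the host, the mapping can be read back directly:

	# Show the host port bound to the node container's SSH port (sketch).
	docker port addons-088134 22/tcp
	# e.g. 127.0.0.1:33139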
	I0510 16:54:55.398921  731104 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa...
	I0510 16:54:55.614482  731104 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0510 16:54:55.635933  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:54:55.653893  731104 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0510 16:54:55.653915  731104 kic_runner.go:114] Args: [docker exec --privileged addons-088134 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0510 16:54:55.755798  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:54:55.775111  731104 machine.go:93] provisionDockerMachine start ...
	I0510 16:54:55.775216  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:55.798837  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:55.799123  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:55.799141  731104 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 16:54:55.998994  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-088134
	
	I0510 16:54:55.999031  731104 ubuntu.go:169] provisioning hostname "addons-088134"
	I0510 16:54:55.999090  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.018776  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:56.019092  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:56.019120  731104 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-088134 && echo "addons-088134" | sudo tee /etc/hostname
	I0510 16:54:56.151604  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-088134
	
	I0510 16:54:56.151702  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.169304  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:56.169593  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:56.169620  731104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-088134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-088134/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-088134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 16:54:56.287709  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 16:54:56.287744  731104 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 16:54:56.287781  731104 ubuntu.go:177] setting up certificates
	I0510 16:54:56.287797  731104 provision.go:84] configureAuth start
	I0510 16:54:56.287867  731104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-088134
	I0510 16:54:56.304724  731104 provision.go:143] copyHostCerts
	I0510 16:54:56.304824  731104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 16:54:56.304977  731104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 16:54:56.305071  731104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 16:54:56.305148  731104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.addons-088134 san=[127.0.0.1 192.168.49.2 addons-088134 localhost minikube]
	I0510 16:54:56.486900  731104 provision.go:177] copyRemoteCerts
	I0510 16:54:56.486976  731104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 16:54:56.487025  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.504796  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:56.592491  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 16:54:56.615042  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0510 16:54:56.637370  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 16:54:56.659450  731104 provision.go:87] duration metric: took 371.601114ms to configureAuth
	I0510 16:54:56.659485  731104 ubuntu.go:193] setting minikube options for container-runtime
	I0510 16:54:56.659679  731104 config.go:182] Loaded profile config "addons-088134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 16:54:56.659800  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.677955  731104 main.go:141] libmachine: Using SSH client type: native
	I0510 16:54:56.678174  731104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0510 16:54:56.678193  731104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 16:54:56.884502  731104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 16:54:56.884534  731104 machine.go:96] duration metric: took 1.109396091s to provisionDockerMachine
	I0510 16:54:56.884549  731104 client.go:171] duration metric: took 11.382729697s to LocalClient.Create
	I0510 16:54:56.884566  731104 start.go:167] duration metric: took 11.382793539s to libmachine.API.Create "addons-088134"
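
The CRIO_MINIKUBE_OPTIONS drop-in written above can be read back over the same channel; a sketch using the minikube ssh subcommand (also exercised in the audit table):

	# Verify the insecure-registry option the provisioner wrote (sketch).
	out/minikube-linux-amd64 -p addons-088134 ssh "cat /etc/sysconfig/crio.minikube"
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '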
	I0510 16:54:56.884574  731104 start.go:293] postStartSetup for "addons-088134" (driver="docker")
	I0510 16:54:56.884584  731104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 16:54:56.884641  731104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 16:54:56.884676  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:56.901866  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:56.993014  731104 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 16:54:56.996361  731104 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 16:54:56.996396  731104 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 16:54:56.996403  731104 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 16:54:56.996411  731104 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 16:54:56.996423  731104 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 16:54:56.996482  731104 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 16:54:56.996505  731104 start.go:296] duration metric: took 111.925893ms for postStartSetup
	I0510 16:54:56.996830  731104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-088134
	I0510 16:54:57.013547  731104 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/config.json ...
	I0510 16:54:57.013809  731104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 16:54:57.013863  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:57.030683  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:57.116461  731104 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 16:54:57.120862  731104 start.go:128] duration metric: took 11.621310165s to createHost
	I0510 16:54:57.120892  731104 start.go:83] releasing machines lock for "addons-088134", held for 11.621515367s
	I0510 16:54:57.120956  731104 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-088134
	I0510 16:54:57.138657  731104 ssh_runner.go:195] Run: cat /version.json
	I0510 16:54:57.138695  731104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 16:54:57.138710  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:57.138781  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:54:57.156019  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:57.156292  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:54:57.312331  731104 ssh_runner.go:195] Run: systemctl --version
	I0510 16:54:57.316881  731104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 16:54:57.454671  731104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 16:54:57.459098  731104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 16:54:57.477434  731104 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 16:54:57.477523  731104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 16:54:57.504603  731104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0510 16:54:57.504624  731104 start.go:495] detecting cgroup driver to use...
	I0510 16:54:57.504657  731104 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0510 16:54:57.504707  731104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 16:54:57.519798  731104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 16:54:57.530382  731104 docker.go:225] disabling cri-docker service (if available) ...
	I0510 16:54:57.530440  731104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 16:54:57.543133  731104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 16:54:57.556522  731104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 16:54:57.633473  731104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 16:54:57.714492  731104 docker.go:241] disabling docker service ...
	I0510 16:54:57.714563  731104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 16:54:57.733118  731104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 16:54:57.743768  731104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 16:54:57.825920  731104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 16:54:57.910593  731104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 16:54:57.921432  731104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 16:54:57.936422  731104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 16:54:57.936476  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.945569  731104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 16:54:57.945642  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.954654  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.963785  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.972779  731104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 16:54:57.981140  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:57.990026  731104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:58.004801  731104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 16:54:58.013835  731104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 16:54:58.022070  731104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 16:54:58.029832  731104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 16:54:58.104384  731104 ssh_runner.go:195] Run: sudo systemctl restart crio
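
The sed edits above amount to four CRI-O settings in 02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl); assuming the same file layout, one grep checks them all after the restart:

	# Confirm the CRI-O config edits took effect (sketch).
	out/minikube-linux-amd64 -p addons-088134 ssh \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"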
	I0510 16:54:58.214495  731104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 16:54:58.214593  731104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 16:54:58.218036  731104 start.go:563] Will wait 60s for crictl version
	I0510 16:54:58.218095  731104 ssh_runner.go:195] Run: which crictl
	I0510 16:54:58.221492  731104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 16:54:58.256903  731104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 16:54:58.257003  731104 ssh_runner.go:195] Run: crio --version
	I0510 16:54:58.293778  731104 ssh_runner.go:195] Run: crio --version
	I0510 16:54:58.329347  731104 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 16:54:58.330515  731104 cli_runner.go:164] Run: docker network inspect addons-088134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 16:54:58.346693  731104 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0510 16:54:58.350407  731104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 16:54:58.360985  731104 kubeadm.go:875] updating cluster {Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 16:54:58.361098  731104 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 16:54:58.361139  731104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 16:54:58.423733  731104 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 16:54:58.423757  731104 crio.go:433] Images already preloaded, skipping extraction
	I0510 16:54:58.423815  731104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 16:54:58.456638  731104 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 16:54:58.456660  731104 cache_images.go:84] Images are preloaded, skipping loading
	I0510 16:54:58.456670  731104 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.33.0 crio true true} ...
	I0510 16:54:58.456782  731104 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-088134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
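
The unit fragment above is installed as a systemd drop-in (the scp lines below show the exact paths); assuming systemd inside the node, the effective unit can be dumped in one command:

	# Print the kubelet unit plus the 10-kubeadm.conf drop-in written below (sketch).
	out/minikube-linux-amd64 -p addons-088134 ssh "systemctl cat kubelet"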
	I0510 16:54:58.456844  731104 ssh_runner.go:195] Run: crio config
	I0510 16:54:58.499495  731104 cni.go:84] Creating CNI manager for ""
	I0510 16:54:58.499519  731104 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:54:58.499532  731104 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0510 16:54:58.499555  731104 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-088134 NodeName:addons-088134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 16:54:58.499675  731104 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-088134"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
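
The assembled manifest is staged at /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). Assuming the kubeadm binary cached under /var/lib/minikube/binaries supports it, the file can be sanity-checked before init with kubeadm config validate:

	# Validate the staged kubeadm manifest inside the node (sketch; subcommand availability assumed for v1.33).
	out/minikube-linux-amd64 -p addons-088134 ssh \
	  "sudo /var/lib/minikube/binaries/v1.33.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"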
	
	I0510 16:54:58.499738  731104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 16:54:58.508314  731104 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 16:54:58.508384  731104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 16:54:58.516925  731104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0510 16:54:58.533458  731104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 16:54:58.549799  731104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0510 16:54:58.566144  731104 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0510 16:54:58.569436  731104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 16:54:58.579596  731104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 16:54:58.653298  731104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 16:54:58.665995  731104 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134 for IP: 192.168.49.2
	I0510 16:54:58.666019  731104 certs.go:194] generating shared ca certs ...
	I0510 16:54:58.666049  731104 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:58.666196  731104 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 16:54:58.875877  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt ...
	I0510 16:54:58.875913  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt: {Name:mk058140c8b275beb4e709bae4cf0b29ea3c1643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:58.876129  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key ...
	I0510 16:54:58.876147  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key: {Name:mk089d1de06bb5005a6634bbdb0baf0d9fcc36f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:58.876258  731104 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 16:54:59.404697  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt ...
	I0510 16:54:59.404730  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt: {Name:mk37496ac2715c4b2c8e1aa8497c599fc431e991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.404930  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key ...
	I0510 16:54:59.404946  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key: {Name:mkb49d83aaf0d3fdf9d7bd45fb3792a7571b2813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.405049  731104 certs.go:256] generating profile certs ...
	I0510 16:54:59.405113  731104 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.key
	I0510 16:54:59.405127  731104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt with IP's: []
	I0510 16:54:59.432971  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt ...
	I0510 16:54:59.433002  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: {Name:mkdfebd11f87ceef8a84d71d85397bcb519642fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.433157  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.key ...
	I0510 16:54:59.433169  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.key: {Name:mk21445d8e5e4bd7fb61273d95b0c609006fbbbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:54:59.433237  731104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41
	I0510 16:54:59.433255  731104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0510 16:55:00.101378  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41 ...
	I0510 16:55:00.101415  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41: {Name:mkebf309fe9c46c35d1c831ef7e73fe547760fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.101599  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41 ...
	I0510 16:55:00.101624  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41: {Name:mke290ab11de1b175dfe7c41149e6881dcd536fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.101700  731104 certs.go:381] copying /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt.a5670b41 -> /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt
	I0510 16:55:00.101779  731104 certs.go:385] copying /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key.a5670b41 -> /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key
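
The SAN set baked into the freshly minted apiserver cert (the four IPs logged at 16:54:59.433255: 10.96.0.1, the first address of the service CIDR, plus 127.0.0.1, 10.0.0.1, and the node IP 192.168.49.2) can be confirmed straight from the file:

    # Print the subject alternative names of the generated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt \
      | grep -A1 'Subject Alternative Name'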
	I0510 16:55:00.101827  731104 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key
	I0510 16:55:00.101851  731104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt with IP's: []
	I0510 16:55:00.363898  731104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt ...
	I0510 16:55:00.363940  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt: {Name:mk957fcdf29ae7c595de720ac14532ca70e2807a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.364115  731104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key ...
	I0510 16:55:00.364130  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key: {Name:mk7f0542185dfffaf9832a3d9b880ca12a5ed240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:00.364301  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 16:55:00.364337  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 16:55:00.364365  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 16:55:00.364388  731104 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 16:55:00.365123  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 16:55:00.388385  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 16:55:00.411033  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 16:55:00.433124  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 16:55:00.455370  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0510 16:55:00.477264  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 16:55:00.499371  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 16:55:00.521583  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0510 16:55:00.543430  731104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 16:55:00.565317  731104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 16:55:00.581640  731104 ssh_runner.go:195] Run: openssl version
	I0510 16:55:00.587196  731104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 16:55:00.596069  731104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 16:55:00.599525  731104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 16:55:00.599582  731104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 16:55:00.606273  731104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
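
The b5213941.0 link name above is not arbitrary: it is the OpenSSL subject-hash of minikubeCA, which is how the /etc/ssl/certs trust directory is indexed. Deriving the same link by hand, a sketch:

    # The trust store is looked up by "<subject-hash>.0" (.1, .2, ... on collisions).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash == b5213941 here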
	I0510 16:55:00.614976  731104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 16:55:00.617997  731104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0510 16:55:00.618057  731104 kubeadm.go:392] StartCluster: {Name:addons-088134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:addons-088134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 16:55:00.618145  731104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 16:55:00.618189  731104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 16:55:00.651684  731104 cri.go:89] found id: ""
	I0510 16:55:00.651766  731104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 16:55:00.660318  731104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0510 16:55:00.668494  731104 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0510 16:55:00.668565  731104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0510 16:55:00.677083  731104 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0510 16:55:00.677105  731104 kubeadm.go:157] found existing configuration files:
	
	I0510 16:55:00.677156  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0510 16:55:00.685265  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0510 16:55:00.685337  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0510 16:55:00.693157  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0510 16:55:00.700925  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0510 16:55:00.700977  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0510 16:55:00.708768  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0510 16:55:00.716607  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0510 16:55:00.716671  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0510 16:55:00.724554  731104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0510 16:55:00.732612  731104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0510 16:55:00.732667  731104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0510 16:55:00.740335  731104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.33.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0510 16:55:00.776040  731104 kubeadm.go:310] [init] Using Kubernetes version: v1.33.0
	I0510 16:55:00.776115  731104 kubeadm.go:310] [preflight] Running pre-flight checks
	I0510 16:55:00.794687  731104 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0510 16:55:00.794806  731104 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-gcp
	I0510 16:55:00.794875  731104 kubeadm.go:310] OS: Linux
	I0510 16:55:00.794954  731104 kubeadm.go:310] CGROUPS_CPU: enabled
	I0510 16:55:00.795042  731104 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0510 16:55:00.795115  731104 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0510 16:55:00.795158  731104 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0510 16:55:00.795201  731104 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0510 16:55:00.795243  731104 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0510 16:55:00.795306  731104 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0510 16:55:00.795374  731104 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0510 16:55:00.795447  731104 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0510 16:55:00.849233  731104 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0510 16:55:00.849414  731104 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0510 16:55:00.849552  731104 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0510 16:55:00.856905  731104 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0510 16:55:00.860444  731104 out.go:235]   - Generating certificates and keys ...
	I0510 16:55:00.860589  731104 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0510 16:55:00.860684  731104 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0510 16:55:00.926590  731104 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0510 16:55:01.184213  731104 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0510 16:55:01.236251  731104 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0510 16:55:01.781902  731104 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0510 16:55:02.158552  731104 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0510 16:55:02.158687  731104 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-088134 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0510 16:55:02.453692  731104 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0510 16:55:02.453878  731104 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-088134 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0510 16:55:02.558010  731104 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0510 16:55:03.081854  731104 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0510 16:55:03.250515  731104 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0510 16:55:03.250663  731104 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0510 16:55:03.599764  731104 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0510 16:55:03.615264  731104 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0510 16:55:04.111684  731104 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0510 16:55:04.271522  731104 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0510 16:55:04.857498  731104 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0510 16:55:04.858004  731104 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0510 16:55:04.860122  731104 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0510 16:55:04.862371  731104 out.go:235]   - Booting up control plane ...
	I0510 16:55:04.862482  731104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0510 16:55:04.862602  731104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0510 16:55:04.862666  731104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0510 16:55:04.871557  731104 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0510 16:55:04.876804  731104 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0510 16:55:04.876877  731104 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0510 16:55:04.954915  731104 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0510 16:55:04.955106  731104 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0510 16:55:05.956672  731104 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001907371s
	I0510 16:55:05.960854  731104 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0510 16:55:05.960980  731104 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0510 16:55:05.961103  731104 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0510 16:55:05.961192  731104 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0510 16:55:08.245996  731104 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.285048452s
	I0510 16:55:09.059145  731104 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.098287186s
	I0510 16:55:10.462702  731104 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501750047s
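
The three probes kubeadm polls above can be replayed by hand from the node; a sketch using the endpoints exactly as logged (-k because each component serves a self-signed or cluster-CA certificate, and /healthz-style paths are exempt from authorization by default):

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler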
	I0510 16:55:10.474885  731104 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0510 16:55:10.484434  731104 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0510 16:55:10.503639  731104 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0510 16:55:10.503931  731104 kubeadm.go:310] [mark-control-plane] Marking the node addons-088134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0510 16:55:10.512234  731104 kubeadm.go:310] [bootstrap-token] Using token: ngtmmz.nuzx3d2w9dfre1k4
	I0510 16:55:10.513714  731104 out.go:235]   - Configuring RBAC rules ...
	I0510 16:55:10.513877  731104 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0510 16:55:10.517540  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0510 16:55:10.525111  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0510 16:55:10.527449  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0510 16:55:10.530125  731104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0510 16:55:10.532559  731104 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0510 16:55:10.869284  731104 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0510 16:55:11.287324  731104 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0510 16:55:11.868144  731104 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0510 16:55:11.868992  731104 kubeadm.go:310] 
	I0510 16:55:11.869077  731104 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0510 16:55:11.869087  731104 kubeadm.go:310] 
	I0510 16:55:11.869186  731104 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0510 16:55:11.869195  731104 kubeadm.go:310] 
	I0510 16:55:11.869225  731104 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0510 16:55:11.869303  731104 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0510 16:55:11.869366  731104 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0510 16:55:11.869407  731104 kubeadm.go:310] 
	I0510 16:55:11.869496  731104 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0510 16:55:11.869507  731104 kubeadm.go:310] 
	I0510 16:55:11.869564  731104 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0510 16:55:11.869575  731104 kubeadm.go:310] 
	I0510 16:55:11.869646  731104 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0510 16:55:11.869742  731104 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0510 16:55:11.869839  731104 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0510 16:55:11.869848  731104 kubeadm.go:310] 
	I0510 16:55:11.869953  731104 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0510 16:55:11.870052  731104 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0510 16:55:11.870061  731104 kubeadm.go:310] 
	I0510 16:55:11.870159  731104 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ngtmmz.nuzx3d2w9dfre1k4 \
	I0510 16:55:11.870297  731104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cab2ae3dd65908c2d6393ff2fdde0e4e0dbad5e0ec941434a6c816c7eedead32 \
	I0510 16:55:11.870333  731104 kubeadm.go:310] 	--control-plane 
	I0510 16:55:11.870343  731104 kubeadm.go:310] 
	I0510 16:55:11.870433  731104 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0510 16:55:11.870451  731104 kubeadm.go:310] 
	I0510 16:55:11.870534  731104 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ngtmmz.nuzx3d2w9dfre1k4 \
	I0510 16:55:11.870650  731104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cab2ae3dd65908c2d6393ff2fdde0e4e0dbad5e0ec941434a6c816c7eedead32 
	I0510 16:55:11.872926  731104 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0510 16:55:11.873136  731104 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-gcp\n", err: exit status 1
	I0510 16:55:11.873232  731104 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
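
The Service-Kubelet warning is harmless under minikube, which manages kubelet itself through the systemd drop-ins written earlier, but on a hand-managed node it would be resolved with the command the warning names:

    # Enable kubelet at boot, per the preflight warning above.
    sudo systemctl enable kubelet.service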
	I0510 16:55:11.873273  731104 cni.go:84] Creating CNI manager for ""
	I0510 16:55:11.873296  731104 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:55:11.874878  731104 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0510 16:55:11.876071  731104 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0510 16:55:11.879859  731104 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.33.0/kubectl ...
	I0510 16:55:11.879879  731104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0510 16:55:11.896739  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
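
Once the manifest is applied, the kindnet workload should come up in kube-system. A quick check, hedged because the DaemonSet name and app=kindnet label are assumed from minikube's kindnet manifest rather than shown in this log:

    kubectl --context addons-088134 -n kube-system get daemonset kindnet
    kubectl --context addons-088134 -n kube-system get pods -l app=kindnet -o wide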
	I0510 16:55:12.096196  731104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0510 16:55:12.096311  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:12.096339  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-088134 minikube.k8s.io/updated_at=2025_05_10T16_55_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4 minikube.k8s.io/name=addons-088134 minikube.k8s.io/primary=true
	I0510 16:55:12.257891  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:12.258014  731104 ops.go:34] apiserver oom_adj: -16
	I0510 16:55:12.758409  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:13.258331  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:13.758876  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:14.258197  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:14.758683  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:15.258410  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:15.758026  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:16.258119  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:16.758231  731104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.33.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0510 16:55:16.825494  731104 kubeadm.go:1105] duration metric: took 4.729247498s to wait for elevateKubeSystemPrivileges
	I0510 16:55:16.825539  731104 kubeadm.go:394] duration metric: took 16.207488619s to StartCluster
	I0510 16:55:16.825578  731104 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:16.825744  731104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 16:55:16.826244  731104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 16:55:16.826504  731104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0510 16:55:16.826499  731104 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 16:55:16.826542  731104 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
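
Each toggle in the toEnable map corresponds to a minikube CLI switch, so the same state can be reproduced or adjusted per addon against this profile:

    # Equivalent one-off operations against the profile from the log:
    minikube -p addons-088134 addons enable ingress
    minikube -p addons-088134 addons enable metrics-server
    minikube -p addons-088134 addons list    # show the resulting enabled/disabled table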
	I0510 16:55:16.826657  731104 addons.go:69] Setting yakd=true in profile "addons-088134"
	I0510 16:55:16.826678  731104 addons.go:238] Setting addon yakd=true in "addons-088134"
	I0510 16:55:16.826692  731104 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-088134"
	I0510 16:55:16.826714  731104 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-088134"
	I0510 16:55:16.826724  731104 addons.go:69] Setting metrics-server=true in profile "addons-088134"
	I0510 16:55:16.826724  731104 config.go:182] Loaded profile config "addons-088134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 16:55:16.826751  731104 addons.go:238] Setting addon metrics-server=true in "addons-088134"
	I0510 16:55:16.826760  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.826775  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.826785  731104 addons.go:69] Setting storage-provisioner=true in profile "addons-088134"
	I0510 16:55:16.826800  731104 addons.go:238] Setting addon storage-provisioner=true in "addons-088134"
	I0510 16:55:16.826830  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827021  731104 addons.go:69] Setting volcano=true in profile "addons-088134"
	I0510 16:55:16.827073  731104 addons.go:238] Setting addon volcano=true in "addons-088134"
	I0510 16:55:16.827113  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827185  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827274  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827286  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827494  731104 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-088134"
	I0510 16:55:16.827525  731104 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-088134"
	I0510 16:55:16.827536  731104 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-088134"
	I0510 16:55:16.827553  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827561  731104 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-088134"
	I0510 16:55:16.827626  731104 addons.go:69] Setting registry=true in profile "addons-088134"
	I0510 16:55:16.827659  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827651  731104 addons.go:238] Setting addon registry=true in "addons-088134"
	I0510 16:55:16.827691  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.827848  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827994  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.827985  731104 addons.go:69] Setting volumesnapshots=true in profile "addons-088134"
	I0510 16:55:16.828012  731104 addons.go:238] Setting addon volumesnapshots=true in "addons-088134"
	I0510 16:55:16.828044  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.828229  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.828471  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.826714  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.829075  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.829154  731104 addons.go:69] Setting ingress=true in profile "addons-088134"
	I0510 16:55:16.829178  731104 addons.go:238] Setting addon ingress=true in "addons-088134"
	I0510 16:55:16.829339  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.831536  731104 addons.go:69] Setting gcp-auth=true in profile "addons-088134"
	I0510 16:55:16.831960  731104 mustload.go:65] Loading cluster: addons-088134
	I0510 16:55:16.832187  731104 config.go:182] Loaded profile config "addons-088134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 16:55:16.832969  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.833496  731104 out.go:177] * Verifying Kubernetes components...
	I0510 16:55:16.833548  731104 addons.go:69] Setting ingress-dns=true in profile "addons-088134"
	I0510 16:55:16.833595  731104 addons.go:238] Setting addon ingress-dns=true in "addons-088134"
	I0510 16:55:16.833646  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.835281  731104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 16:55:16.829084  731104 addons.go:69] Setting default-storageclass=true in profile "addons-088134"
	I0510 16:55:16.838706  731104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-088134"
	I0510 16:55:16.839118  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.829097  731104 addons.go:69] Setting cloud-spanner=true in profile "addons-088134"
	I0510 16:55:16.829107  731104 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-088134"
	I0510 16:55:16.839216  731104 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-088134"
	I0510 16:55:16.839267  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.839738  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.839946  731104 addons.go:238] Setting addon cloud-spanner=true in "addons-088134"
	I0510 16:55:16.839997  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.826675  731104 addons.go:69] Setting inspektor-gadget=true in profile "addons-088134"
	I0510 16:55:16.840429  731104 addons.go:238] Setting addon inspektor-gadget=true in "addons-088134"
	I0510 16:55:16.840470  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.852208  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.852866  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.852930  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.854375  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.863036  731104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 16:55:16.864562  731104 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 16:55:16.864588  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 16:55:16.864663  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.877554  731104 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0510 16:55:16.879102  731104 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 16:55:16.879127  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0510 16:55:16.879194  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.881290  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0510 16:55:16.882489  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0510 16:55:16.882515  731104 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0510 16:55:16.882603  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.886384  731104 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0510 16:55:16.887782  731104 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 16:55:16.887808  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0510 16:55:16.887875  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	W0510 16:55:16.893100  731104 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0510 16:55:16.899547  731104 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0510 16:55:16.900892  731104 addons.go:238] Setting addon default-storageclass=true in "addons-088134"
	I0510 16:55:16.900947  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.900958  731104 out.go:177]   - Using image docker.io/registry:3.0.0
	I0510 16:55:16.901404  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.902382  731104 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0510 16:55:16.902413  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0510 16:55:16.902468  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.904344  731104 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-088134"
	I0510 16:55:16.904438  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.904976  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:16.911787  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:16.912677  731104 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0510 16:55:16.914101  731104 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 16:55:16.914121  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0510 16:55:16.914178  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.920248  731104 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.39.0
	I0510 16:55:16.921656  731104 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0510 16:55:16.921682  731104 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0510 16:55:16.921759  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.929016  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0510 16:55:16.929170  731104 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0510 16:55:16.930198  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0510 16:55:16.930218  731104 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0510 16:55:16.930293  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.931849  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0510 16:55:16.934178  731104 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0510 16:55:16.935098  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0510 16:55:16.935460  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 16:55:16.935487  731104 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 16:55:16.935623  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.937330  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.937750  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0510 16:55:16.939221  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 16:55:16.940318  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0510 16:55:16.941169  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.941379  731104 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.33
	I0510 16:55:16.941379  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0510 16:55:16.942574  731104 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0510 16:55:16.942593  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0510 16:55:16.942650  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.943383  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0510 16:55:16.943805  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.950701  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 16:55:16.953247  731104 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 16:55:16.953268  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0510 16:55:16.953401  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.960409  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0510 16:55:16.961805  731104 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0510 16:55:16.962800  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.964552  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0510 16:55:16.964573  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0510 16:55:16.964640  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.969397  731104 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 16:55:16.969426  731104 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 16:55:16.969490  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.976404  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.979521  731104 out.go:177]   - Using image docker.io/busybox:stable
	I0510 16:55:16.983120  731104 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0510 16:55:16.984026  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.985849  731104 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 16:55:16.985870  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0510 16:55:16.985937  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:16.988989  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:16.992937  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.006373  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.007664  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.009414  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.010031  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.011853  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.012061  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:17.157907  731104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0510 16:55:17.351306  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 16:55:17.352546  731104 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0510 16:55:17.352623  731104 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0510 16:55:17.362670  731104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 16:55:17.445441  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 16:55:17.447281  731104 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0510 16:55:17.447311  731104 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0510 16:55:17.455710  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0510 16:55:17.547883  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0510 16:55:17.644008  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0510 16:55:17.644036  731104 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0510 16:55:17.648806  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0510 16:55:17.652823  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 16:55:17.652897  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0510 16:55:17.660326  731104 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0510 16:55:17.660408  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0510 16:55:17.660669  731104 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0510 16:55:17.660724  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0510 16:55:17.662861  731104 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0510 16:55:17.662918  731104 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0510 16:55:17.666875  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0510 16:55:17.666952  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0510 16:55:17.746071  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0510 16:55:17.865629  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0510 16:55:17.944540  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0510 16:55:17.958611  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0510 16:55:17.958640  731104 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0510 16:55:17.967241  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 16:55:17.967329  731104 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 16:55:18.044082  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0510 16:55:18.044168  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0510 16:55:18.049955  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0510 16:55:18.054187  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0510 16:55:18.061195  731104 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0510 16:55:18.061229  731104 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0510 16:55:18.358854  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0510 16:55:18.358955  731104 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0510 16:55:18.550305  731104 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 16:55:18.550337  731104 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0510 16:55:18.765171  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0510 16:55:18.765260  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0510 16:55:18.845747  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0510 16:55:18.845785  731104 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0510 16:55:18.862941  731104 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0510 16:55:18.862969  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0510 16:55:19.047880  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 16:55:19.149161  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0510 16:55:19.248070  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0510 16:55:19.248157  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0510 16:55:19.258602  731104 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 16:55:19.258703  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0510 16:55:19.354527  731104 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.33.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.196560823s)
	I0510 16:55:19.354657  731104 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
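The sed pipeline above rewrites the coredns ConfigMap in place: it injects a hosts stanza immediately ahead of the existing forward plugin, and a log directive ahead of errors. Assuming the stock Corefile layout, the injected fragment reads roughly:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

followed by the pre-existing "forward . /etc/resolv.conf" line, so in-cluster lookups of host.minikube.internal resolve to the network gateway while everything else falls through to the node resolver.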
	I0510 16:55:19.451834  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 16:55:19.745338  731104 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0510 16:55:19.745381  731104 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0510 16:55:19.962402  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0510 16:55:19.962433  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0510 16:55:20.144953  731104 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-088134" context rescaled to 1 replicas
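The rescale goes through the deployment's scale subresource; a hand-run equivalent, assuming the same kubeconfig context, would be something like:

	kubectl --context addons-088134 -n kube-system scale deployment coredns --replicas=1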
	I0510 16:55:20.151893  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0510 16:55:20.151997  731104 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0510 16:55:20.345483  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0510 16:55:20.345590  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0510 16:55:20.547215  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0510 16:55:20.547243  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0510 16:55:20.657266  731104 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 16:55:20.657367  731104 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0510 16:55:20.867351  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0510 16:55:21.665349  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.313945073s)
	I0510 16:55:21.665403  731104 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.302691389s)
	I0510 16:55:21.665439  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.21988542s)
	I0510 16:55:21.665469  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.209736384s)
	I0510 16:55:21.665604  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.117647518s)
	I0510 16:55:21.667379  731104 node_ready.go:35] waiting up to 6m0s for node "addons-088134" to be "Ready" ...
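From this point minikube polls the node object until its Ready condition flips to True, which produces the node_ready will-retry warnings interleaved below. A rough standalone equivalent, with the timeout taken from the log line above, is:

	kubectl --context addons-088134 wait --for=condition=Ready node/addons-088134 --timeout=6m0s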
	I0510 16:55:23.258680  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.609829618s)
	I0510 16:55:23.258727  731104 addons.go:479] Verifying addon ingress=true in "addons-088134"
	I0510 16:55:23.258810  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.512646003s)
	I0510 16:55:23.258870  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.393203066s)
	I0510 16:55:23.258946  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.314309627s)
	I0510 16:55:23.259038  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.209048133s)
	I0510 16:55:23.259116  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.204882771s)
	I0510 16:55:23.259142  731104 addons.go:479] Verifying addon registry=true in "addons-088134"
	I0510 16:55:23.260333  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.212307895s)
	I0510 16:55:23.260364  731104 addons.go:479] Verifying addon metrics-server=true in "addons-088134"
	I0510 16:55:23.260448  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.11051481s)
	I0510 16:55:23.260584  731104 out.go:177] * Verifying ingress addon...
	I0510 16:55:23.261537  731104 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-088134 service yakd-dashboard -n yakd-dashboard
	
	I0510 16:55:23.262860  731104 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0510 16:55:23.264216  731104 out.go:177] * Verifying registry addon...
	I0510 16:55:23.266629  731104 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0510 16:55:23.266941  731104 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0510 16:55:23.266962  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:23.361551  731104 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 16:55:23.361580  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
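The long runs of waiting-for-pod lines that follow are minikube's kapi poller listing pods by label selector until each reports Running. Roughly the same check can be expressed with kubectl wait (checking condition Ready rather than phase Running, with the selectors and namespaces taken from the log):

	kubectl --context addons-088134 -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
	kubectl --context addons-088134 -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m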
	W0510 16:55:23.672644  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:23.849227  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:23.849519  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:23.861712  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.409766015s)
	W0510 16:55:23.861773  731104 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0510 16:55:23.861809  731104 retry.go:31] will retry after 180.682115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
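This is the familiar kubectl CRD-ordering race rather than a real addon defect: all six manifests go through a single apply, and the csi-hostpath-snapclass VolumeSnapshotClass is validated against API discovery before the volumesnapshotclasses CRD created in the same batch has been registered, hence "no matches for kind". minikube handles it by retrying (and, below, re-applying with --force). A two-phase apply that sidesteps the race entirely would look roughly like:

	# phase 1: install the CRDs and wait for the API server to register them
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# phase 2: the custom resources and the snapshot controller
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml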
	I0510 16:55:24.042702  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0510 16:55:24.147294  731104 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0510 16:55:24.147376  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:24.172824  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
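The ssh client here is constructed from Docker's published port map: the inspect template two lines up asks which host port forwards to 22/tcp inside the addons-088134 node container, and minikube then dials 127.0.0.1 on that port with the machine key as user docker. Reproducing the lookup by hand from the same values shown in the log:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-088134)
	ssh -p "$PORT" -i /home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa docker@127.0.0.1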
	I0510 16:55:24.266600  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:24.269049  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:24.347190  731104 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0510 16:55:24.371297  731104 addons.go:238] Setting addon gcp-auth=true in "addons-088134"
	I0510 16:55:24.371363  731104 host.go:66] Checking if "addons-088134" exists ...
	I0510 16:55:24.371962  731104 cli_runner.go:164] Run: docker container inspect addons-088134 --format={{.State.Status}}
	I0510 16:55:24.391474  731104 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0510 16:55:24.391566  731104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-088134
	I0510 16:55:24.410020  731104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/addons-088134/id_rsa Username:docker}
	I0510 16:55:24.456585  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.589093175s)
	I0510 16:55:24.456644  731104 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-088134"
	I0510 16:55:24.458366  731104 out.go:177] * Verifying csi-hostpath-driver addon...
	I0510 16:55:24.460532  731104 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0510 16:55:24.467705  731104 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 16:55:24.467733  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:24.766612  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:24.769401  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:24.963365  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:25.265827  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:25.268740  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:25.463497  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:25.766155  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:25.769173  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:25.964179  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:26.171009  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:26.266047  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:26.269140  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:26.464062  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:26.766957  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:26.769030  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:26.802556  731104 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.759792574s)
	I0510 16:55:26.802596  731104 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.411064974s)
	I0510 16:55:26.804636  731104 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0510 16:55:26.805945  731104 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0510 16:55:26.807166  731104 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0510 16:55:26.807183  731104 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0510 16:55:26.824759  731104 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0510 16:55:26.824785  731104 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0510 16:55:26.841624  731104 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 16:55:26.841646  731104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0510 16:55:26.858490  731104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0510 16:55:26.963638  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:27.186856  731104 addons.go:479] Verifying addon gcp-auth=true in "addons-088134"
	I0510 16:55:27.188824  731104 out.go:177] * Verifying gcp-auth addon...
	I0510 16:55:27.190931  731104 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0510 16:55:27.192914  731104 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0510 16:55:27.192937  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:27.267266  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:27.268925  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:27.463855  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:27.694354  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:27.766098  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:27.769186  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:27.964119  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:28.194514  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:28.266593  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:28.269473  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:28.465037  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:28.670986  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:28.694823  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:28.766733  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:28.768809  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:28.963822  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:29.194234  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:29.266716  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:29.268942  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:29.463900  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:29.693784  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:29.766724  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:29.768851  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:29.963971  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:30.195156  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:30.296249  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:30.296387  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:30.464347  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:30.694574  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:30.766523  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:30.769634  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:30.963680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:31.170470  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:31.194338  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:31.266491  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:31.269499  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:31.464458  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:31.693868  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:31.765983  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:31.769130  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:31.964501  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:32.194742  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:32.266818  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:32.268827  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:32.463970  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:32.694466  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:32.766193  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:32.769277  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:32.964262  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:33.171014  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:33.193723  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:33.266574  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:33.269571  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:33.463163  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:33.694007  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:33.765672  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:33.769711  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:33.963552  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:34.194134  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:34.266064  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:34.269030  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:34.464032  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:34.694753  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:34.766309  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:34.769211  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:34.965331  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:35.194029  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:35.266110  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:35.269107  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:35.464127  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:35.670826  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:35.694341  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:35.766409  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:35.769432  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:35.964137  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:36.194922  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:36.267029  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:36.268884  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:36.463869  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:36.694691  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:36.766631  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:36.768710  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:36.964099  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:37.194656  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:37.266442  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:37.269423  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:37.464258  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:37.671021  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:37.693809  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:37.766789  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:37.768883  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:37.963849  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:38.195067  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:38.297222  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:38.297477  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:38.463792  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:38.694689  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:38.766464  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:38.769598  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:38.963475  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:39.194090  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:39.266016  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:39.268991  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:39.463974  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:39.694465  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:39.766154  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:39.769220  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:39.964346  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:40.170198  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:40.194371  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:40.266512  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:40.269531  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:40.463576  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:40.694698  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:40.766546  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:40.769524  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:40.963360  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:41.194858  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:41.266612  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:41.269725  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:41.463522  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:41.694069  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:41.765825  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:41.769891  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:41.964400  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:42.170273  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:42.194296  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:42.266705  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:42.268877  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:42.464026  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:42.694583  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:42.766394  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:42.769577  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:42.963349  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:43.194099  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:43.265690  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:43.269811  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:43.463786  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:43.694457  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:43.766950  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:43.770137  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:43.963964  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:44.170818  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:44.194469  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:44.266619  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:44.269598  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:44.463466  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:44.694164  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:44.765779  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:44.769850  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:44.963551  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:45.193855  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:45.266653  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:45.268767  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:45.463691  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:45.694116  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:45.765712  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:45.769878  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:45.963863  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:46.170946  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:46.194677  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:46.266845  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:46.268824  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:46.463698  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:46.694739  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:46.766584  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:46.769589  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:46.963824  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:47.194543  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:47.266434  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:47.269184  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:47.464069  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:47.694826  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:47.766768  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:47.768836  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:47.963717  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:48.194473  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:48.266220  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:48.269241  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:48.464228  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:48.669888  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:48.694816  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:48.766527  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:48.769531  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:48.963649  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:49.193883  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:49.266951  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:49.269122  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:49.463899  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:49.694559  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:49.766693  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:49.768735  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:49.963661  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:50.194678  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:50.266509  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:50.269502  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:50.463508  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:50.670173  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:50.693995  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:50.765780  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:50.769898  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:50.963745  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:51.194232  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:51.266056  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:51.268830  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:51.463784  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:51.694282  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:51.766202  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:51.769218  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:51.963960  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:52.194736  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:52.266744  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:52.269101  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:52.464445  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:52.694111  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:52.765968  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:52.768888  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:52.963893  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:53.170713  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:53.194535  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:53.266508  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:53.269581  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:53.463491  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:53.694198  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:53.766076  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:53.769148  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:53.964050  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:54.193639  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:54.267258  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:54.269280  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:54.464534  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:54.694081  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:54.765842  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:54.769000  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:54.964129  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:55.170792  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:55.194575  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:55.266589  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:55.269558  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:55.463513  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:55.693587  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:55.766342  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:55.769476  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:55.964542  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:56.194746  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:56.266570  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:56.269750  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:56.463637  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:56.694178  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:56.765942  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:56.769168  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:56.964645  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:57.193851  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:57.266858  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:57.268998  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:57.463938  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:55:57.670903  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:55:57.694745  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:57.766496  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:57.769693  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:57.963595  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:58.194430  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:58.266468  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:58.269548  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:58.463467  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:58.694363  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:58.766345  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:58.769523  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:58.964567  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:59.194436  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:59.266591  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:59.269992  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:59.463845  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:55:59.694133  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:55:59.765932  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:55:59.768966  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:55:59.963971  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0510 16:56:00.170650  731104 node_ready.go:57] node "addons-088134" has "Ready":"False" status (will retry)
	I0510 16:56:00.194379  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:00.266182  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:00.269074  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:00.464295  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:00.694680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:00.794998  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:00.795245  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:00.964394  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:01.193898  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:01.267725  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:01.276517  731104 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0510 16:56:01.276547  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:01.464271  731104 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0510 16:56:01.464303  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
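The interleaved kapi.go:96 lines above are per-addon wait loops, apparently one goroutine per addon (which is why the gcp-auth, ingress-nginx, registry, and csi-hostpath-driver selectors interleave), each polling pods by label selector until they leave Pending; kapi.go:86 logs once a selector first matches. A minimal client-go sketch of such a loop, assuming a kubernetes.Interface client; the function name and the 500ms period are illustrative, not minikube's actual implementation:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsBySelector polls pods matching selector in ns until all are
// Running, mirroring the "waiting for pod ... current state: Pending" lines.
// Hypothetical sketch; the 500ms poll period is an assumption.
func waitForPodsBySelector(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}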
	I0510 16:56:01.670506  731104 node_ready.go:49] node "addons-088134" is "Ready"
	I0510 16:56:01.670541  731104 node_ready.go:38] duration metric: took 40.003132925s for node "addons-088134" to be "Ready" ...
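The node_ready.go retries flip from Ready:"False" to Ready here, 40s after the wait began. The predicate itself reduces to reading the node's Ready condition; a hedged client-go sketch (isNodeReady is an illustrative name, not minikube's code):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady reports whether the named node carries condition Ready=True,
// the check behind the node_ready.go lines above. Sketch only.
func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}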
	I0510 16:56:01.670563  731104 api_server.go:52] waiting for apiserver process to appear ...
	I0510 16:56:01.670627  731104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 16:56:01.685377  731104 api_server.go:72] duration metric: took 44.858758077s to wait for apiserver process to appear ...
	I0510 16:56:01.685410  731104 api_server.go:88] waiting for apiserver healthz status ...
	I0510 16:56:01.685439  731104 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0510 16:56:01.691167  731104 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0510 16:56:01.692299  731104 api_server.go:141] control plane version: v1.33.0
	I0510 16:56:01.692331  731104 api_server.go:131] duration metric: took 6.91101ms to wait for apiserver health ...
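After the pgrep for the kube-apiserver process succeeds, api_server.go probes /healthz until it returns 200 with body "ok" (logged at api_server.go:253/279 above). A minimal sketch of that probe; skipping TLS verification is an assumption for brevity, since a real client would trust the cluster CA:

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthz GETs <base>/healthz and requires HTTP 200. Sketch only:
// InsecureSkipVerify is a simplification; minikube verifies the cluster CA.
func apiServerHealthz(base string) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

For the run above this would be called as apiServerHealthz("https://192.168.49.2:8443").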
	I0510 16:56:01.692345  731104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 16:56:01.693473  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:01.696016  731104 system_pods.go:59] 19 kube-system pods found
	I0510 16:56:01.696055  731104 system_pods.go:61] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:01.696067  731104 system_pods.go:61] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 16:56:01.696080  731104 system_pods.go:61] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:01.696093  731104 system_pods.go:61] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:01.696103  731104 system_pods.go:61] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:01.696114  731104 system_pods.go:61] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:01.696123  731104 system_pods.go:61] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:01.696131  731104 system_pods.go:61] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:01.696139  731104 system_pods.go:61] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:01.696151  731104 system_pods.go:61] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:01.696160  731104 system_pods.go:61] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:01.696169  731104 system_pods.go:61] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:01.696177  731104 system_pods.go:61] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:01.696191  731104 system_pods.go:61] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:01.696205  731104 system_pods.go:61] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:01.696217  731104 system_pods.go:61] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:01.696228  731104 system_pods.go:61] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.696239  731104 system_pods.go:61] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.696250  731104 system_pods.go:61] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 16:56:01.696259  731104 system_pods.go:74] duration metric: took 3.906115ms to wait for pod list to return data ...
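The per-pod "Pending / Ready:ContainersNotReady (containers with unready status: [...])" strings in the system_pods.go listing above are a rendering of each pod's phase plus any non-True readiness conditions. A sketch of how such a summary can be derived (podSummary is illustrative):

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podSummary reconstructs a "Phase / Type:Reason (Message)" line like those
// printed by system_pods.go above. Illustrative only.
func podSummary(p *corev1.Pod) string {
	s := string(p.Status.Phase)
	for _, c := range p.Status.Conditions {
		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) && c.Status != corev1.ConditionTrue {
			s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
		}
	}
	return s
}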
	I0510 16:56:01.696272  731104 default_sa.go:34] waiting for default service account to be created ...
	I0510 16:56:01.756925  731104 default_sa.go:45] found service account: "default"
	I0510 16:56:01.756968  731104 default_sa.go:55] duration metric: took 60.684361ms for default service account to be created ...
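The default_sa.go step amounts to polling for the "default" ServiceAccount in the default namespace. A hedged sketch using apimachinery's wait helper; the 1s interval and 1m timeout are assumptions:

package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists, as logged
// by default_sa.go above. Interval and timeout values are assumptions.
func waitForDefaultSA(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil
			}
			return err == nil, err
		})
}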
	I0510 16:56:01.756981  731104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0510 16:56:01.769236  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:01.769326  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:01.844937  731104 system_pods.go:86] 19 kube-system pods found
	I0510 16:56:01.845057  731104 system_pods.go:89] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:01.845086  731104 system_pods.go:89] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 16:56:01.845134  731104 system_pods.go:89] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:01.845155  731104 system_pods.go:89] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:01.845173  731104 system_pods.go:89] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:01.845189  731104 system_pods.go:89] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:01.845228  731104 system_pods.go:89] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:01.845239  731104 system_pods.go:89] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:01.845245  731104 system_pods.go:89] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:01.845256  731104 system_pods.go:89] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:01.845261  731104 system_pods.go:89] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:01.845266  731104 system_pods.go:89] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:01.845278  731104 system_pods.go:89] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:01.845303  731104 system_pods.go:89] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:01.845316  731104 system_pods.go:89] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:01.845324  731104 system_pods.go:89] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:01.845354  731104 system_pods.go:89] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.845369  731104 system_pods.go:89] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:01.845380  731104 system_pods.go:89] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 16:56:01.845410  731104 retry.go:31] will retry after 273.218795ms: missing components: kube-dns
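The retry.go:31 "will retry after 273.218795ms" lines show a jittered backoff wrapped around the k8s-apps check, which at this point fails only on kube-dns (coredns is still Pending). A generic sketch of that pattern; the backoff policy shown is an assumption, not minikube's exact retry.go:

package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter reruns fn until it succeeds or attempts run out, sleeping a
// randomized delay in [base, 2*base) between tries. Policy is an assumption.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}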
	I0510 16:56:01.964803  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:02.151901  731104 system_pods.go:86] 19 kube-system pods found
	I0510 16:56:02.151984  731104 system_pods.go:89] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:02.151999  731104 system_pods.go:89] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0510 16:56:02.152080  731104 system_pods.go:89] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:02.152126  731104 system_pods.go:89] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:02.152152  731104 system_pods.go:89] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:02.152164  731104 system_pods.go:89] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:02.152174  731104 system_pods.go:89] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:02.152180  731104 system_pods.go:89] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:02.152186  731104 system_pods.go:89] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:02.152200  731104 system_pods.go:89] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:02.152207  731104 system_pods.go:89] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:02.152215  731104 system_pods.go:89] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:02.152229  731104 system_pods.go:89] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:02.152244  731104 system_pods.go:89] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:02.152254  731104 system_pods.go:89] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:02.152270  731104 system_pods.go:89] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:02.152284  731104 system_pods.go:89] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.152398  731104 system_pods.go:89] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.152701  731104 system_pods.go:89] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0510 16:56:02.152730  731104 retry.go:31] will retry after 326.769279ms: missing components: kube-dns
	I0510 16:56:02.248975  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:02.349933  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:02.350148  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:02.465121  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:02.566964  731104 system_pods.go:86] 19 kube-system pods found
	I0510 16:56:02.567001  731104 system_pods.go:89] "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0510 16:56:02.567006  731104 system_pods.go:89] "coredns-674b8bbfcf-n4msm" [0cb19c4f-40cd-4145-98c3-f1710d609272] Running
	I0510 16:56:02.567014  731104 system_pods.go:89] "csi-hostpath-attacher-0" [a26eced3-d492-41f0-9f43-f163252af7ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0510 16:56:02.567020  731104 system_pods.go:89] "csi-hostpath-resizer-0" [bbb9ed99-10a0-49cf-a4ff-c1ec27a30a5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0510 16:56:02.567025  731104 system_pods.go:89] "csi-hostpathplugin-cbgm9" [5465e1cc-996f-4ede-a2cf-c3eaaa0b37de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0510 16:56:02.567030  731104 system_pods.go:89] "etcd-addons-088134" [d95aa406-9fc3-4735-80c5-f9f17cde659d] Running
	I0510 16:56:02.567034  731104 system_pods.go:89] "kindnet-9929f" [f012534c-b774-4c7c-8844-d37bddf2b6e4] Running
	I0510 16:56:02.567037  731104 system_pods.go:89] "kube-apiserver-addons-088134" [91981f1a-14b3-4e5a-99e6-9abc8900080e] Running
	I0510 16:56:02.567042  731104 system_pods.go:89] "kube-controller-manager-addons-088134" [417095d9-ac03-4918-bcb6-91996522918b] Running
	I0510 16:56:02.567047  731104 system_pods.go:89] "kube-ingress-dns-minikube" [2f978a66-7d99-44f4-a58a-d0df66466df0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0510 16:56:02.567050  731104 system_pods.go:89] "kube-proxy-rwb2j" [db4b4b5c-2ed3-46a1-82c6-d3c6bc3cbb94] Running
	I0510 16:56:02.567053  731104 system_pods.go:89] "kube-scheduler-addons-088134" [2ef52c7c-9ca2-447b-84be-d60312db1962] Running
	I0510 16:56:02.567058  731104 system_pods.go:89] "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0510 16:56:02.567063  731104 system_pods.go:89] "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0510 16:56:02.567072  731104 system_pods.go:89] "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0510 16:56:02.567079  731104 system_pods.go:89] "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0510 16:56:02.567087  731104 system_pods.go:89] "snapshot-controller-68b874b76f-cxdtz" [1bbae0e1-c191-4e58-aea9-a94542984207] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.567094  731104 system_pods.go:89] "snapshot-controller-68b874b76f-qng99" [0c237785-f4a0-4f1c-a33e-1d6d99b09ca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0510 16:56:02.567098  731104 system_pods.go:89] "storage-provisioner" [d533b8b2-edf7-4e05-9fed-4c8c05a23f60] Running
	I0510 16:56:02.567106  731104 system_pods.go:126] duration metric: took 810.119278ms to wait for k8s-apps to be running ...
	I0510 16:56:02.567116  731104 system_svc.go:44] waiting for kubelet service to be running ....
	I0510 16:56:02.567160  731104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 16:56:02.579185  731104 system_svc.go:56] duration metric: took 12.058834ms WaitForService to wait for kubelet
	I0510 16:56:02.579217  731104 kubeadm.go:578] duration metric: took 45.752605221s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
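The system_svc.go check runs `sudo systemctl is-active --quiet service kubelet` through the driver's ssh_runner; systemctl exits 0 exactly when the unit is active, so the check is just the command's exit status. Approximated here with os/exec on the host (kubeletActive is an illustrative name):

package sketch

import "os/exec"

// kubeletActive mirrors the check logged above: `systemctl is-active --quiet`
// exits 0 iff the unit is active. Running it locally via os/exec is a
// simplification; minikube executes it inside the node over SSH.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}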
	I0510 16:56:02.579244  731104 node_conditions.go:102] verifying NodePressure condition ...
	I0510 16:56:02.582180  731104 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 16:56:02.582208  731104 node_conditions.go:123] node cpu capacity is 8
	I0510 16:56:02.582228  731104 node_conditions.go:105] duration metric: took 2.977345ms to run NodePressure ...
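The node_conditions.go capacity figures above (304681132Ki ephemeral storage, 8 CPUs) are read straight off the Node object's status. A client-go sketch (printNodeCapacity is illustrative):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity reads ephemeral-storage and CPU capacity from the node's
// status, matching the node_conditions.go lines above. Sketch only.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface, name string) error {
	n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := n.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}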
	I0510 16:56:02.582244  731104 start.go:241] waiting for startup goroutines ...
	I0510 16:56:02.694366  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:02.766751  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:02.769346  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:02.965109  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:03.195366  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:03.266763  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:03.269592  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:03.464264  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:03.694961  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:03.765930  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:03.769117  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:03.964961  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:04.195175  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:04.266330  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:04.269395  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:04.465061  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:04.694032  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:04.765933  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:04.769169  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:04.964383  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:05.194964  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:05.295801  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:05.295845  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:05.464489  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:05.694946  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:05.767019  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:05.769298  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:05.965099  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:06.195273  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:06.266039  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:06.269049  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:06.465423  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:06.694877  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:06.767133  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:06.769150  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:06.964845  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:07.195010  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:07.267820  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:07.269737  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:07.464279  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:07.694287  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:07.766381  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:07.769626  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:07.963775  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:08.195655  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:08.267303  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:08.269249  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:08.464863  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:08.694177  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:08.766332  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:08.769465  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:08.964433  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:09.195055  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:09.266064  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:09.269056  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:09.464413  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:09.694620  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:09.766991  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:09.769191  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:09.964392  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:10.245115  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:10.266515  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:10.269611  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:10.464420  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:10.744977  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:10.767356  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:10.769511  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:10.964169  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:11.194112  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:11.266149  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:11.269375  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:11.464699  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:11.694523  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:11.766829  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:11.769199  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:11.964736  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:12.194905  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:12.266745  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:12.268966  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:12.464259  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:12.694680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:12.795307  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:12.795307  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:12.964544  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:13.194326  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:13.267186  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:13.269657  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:13.464879  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:13.746420  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:13.767049  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:13.769720  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:13.963963  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:14.244984  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:14.266376  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:14.269969  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:14.464519  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:14.745044  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:14.846274  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:14.846325  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:14.964441  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:15.194487  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:15.266734  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:15.269489  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:15.465241  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:15.694084  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:15.765943  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:15.769020  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:15.964540  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:16.194214  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:16.266512  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:16.269543  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:16.465097  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:16.694576  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:16.766715  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:16.769618  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:16.964410  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:17.194369  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:17.266690  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:17.269416  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:17.464772  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:17.694704  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:17.766652  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:17.769114  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:17.964481  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:18.194495  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:18.266717  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:18.269436  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:18.467059  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:18.694635  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:18.766922  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:18.769004  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:18.964842  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:19.195111  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:19.265921  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:19.268920  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:19.464154  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:19.694588  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:19.766977  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:19.769287  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:19.964728  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:20.245504  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:20.266798  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:20.269809  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:20.464457  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:20.694910  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:20.767152  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:20.769128  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:20.964457  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:21.194929  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:21.265911  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:21.269182  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:21.464733  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:21.695136  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:21.766348  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:21.769494  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:21.963884  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:22.193953  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:22.267385  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:22.269340  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:22.464640  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:22.694787  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:22.795435  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:22.795483  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:22.964640  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:23.194333  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:23.266341  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:23.269682  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:23.463903  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:23.748196  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:23.766087  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:23.769421  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:23.964487  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:24.194829  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:24.296013  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:24.296058  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:24.464049  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:24.694052  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:24.766060  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:24.769169  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:24.964795  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:25.195045  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:25.266152  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:25.269386  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:25.464707  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:25.694687  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:25.766650  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:25.768879  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:25.964098  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:26.193892  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:26.266879  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:26.268869  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:26.464110  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:26.694378  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:26.766407  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:26.769323  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:26.963686  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:27.194365  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:27.266140  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:27.269147  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:27.464289  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:27.694392  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:27.766381  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:27.769443  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:27.964815  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:28.247187  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:28.266077  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:28.269334  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:28.469106  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:28.747677  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:28.767327  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:28.846515  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:28.965034  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:29.259026  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:29.348089  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:29.348394  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:29.463968  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:29.746572  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:29.766569  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:29.770297  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:29.963734  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:30.246347  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:30.266179  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:30.269547  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:30.463796  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:30.695529  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:30.766460  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:30.769574  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:30.963642  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:31.194585  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:31.348253  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:31.348880  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:31.464424  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:31.694218  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:31.765820  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:31.770004  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:31.964307  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:32.194171  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:32.266495  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:32.269819  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:32.464217  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:32.694406  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:32.766422  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:32.769796  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:32.963927  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:33.195522  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:33.266199  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:33.269381  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:33.464649  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:33.695524  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:33.766907  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:33.769410  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:33.964769  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:34.194698  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:34.266513  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:34.269482  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:34.464921  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:34.695156  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:34.766542  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:34.769954  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:34.964356  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:35.196521  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:35.266335  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:35.269453  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:35.464987  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:35.695579  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:35.766056  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:35.769301  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:35.965342  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:36.194343  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:36.266603  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:36.269573  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:36.463834  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:36.695399  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:36.766424  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:36.769675  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:36.964359  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:37.194683  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:37.266533  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:37.269445  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:37.464547  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:37.694166  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:37.766286  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:37.769222  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:37.964782  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:38.194805  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:38.267461  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:38.269163  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:38.464635  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:38.694762  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:38.766823  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:38.769372  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:38.964616  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:39.195043  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:39.295445  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:39.295561  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:39.464668  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:39.695236  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:39.766373  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:39.769643  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:39.964155  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:40.194354  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:40.266770  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:40.269278  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:40.464455  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:40.694554  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:40.766893  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:40.769046  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:40.964446  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:41.194225  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:41.266789  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:41.269319  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:41.464916  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:41.694680  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:41.795736  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:41.795847  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:41.964873  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:42.195010  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:42.266238  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:42.269451  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:42.464774  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:42.695383  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:42.796591  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:42.796707  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:42.963699  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:43.195244  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:43.295642  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0510 16:56:43.295652  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:43.465104  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:43.746724  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:43.844785  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:43.862457  731104 kapi.go:107] duration metric: took 1m20.595827968s to wait for kubernetes.io/minikube-addons=registry ...
	I0510 16:56:43.965692  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:44.247155  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:44.267390  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:44.463942  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:44.747660  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:44.849518  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:44.964125  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:45.246403  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:45.267191  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:45.464525  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:45.694657  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:45.766710  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:45.964558  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:46.194428  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:46.266779  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:46.463562  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:46.711567  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:46.766741  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:46.963852  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:47.194869  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:47.266931  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:47.464060  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:47.693641  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:47.766752  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:47.963862  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:48.194937  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:48.265982  731104 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0510 16:56:48.464654  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:48.694893  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:48.779960  731104 kapi.go:107] duration metric: took 1m25.517093917s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0510 16:56:48.964647  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:49.194899  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:49.464092  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:49.694057  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:49.964566  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:50.194367  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:50.465251  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:50.694520  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:50.965523  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:51.245793  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:51.464577  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:51.694654  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:51.964638  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:52.194399  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:52.464862  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:52.695398  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:52.965487  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:53.194713  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:53.464902  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:53.694531  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:53.964338  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:54.193912  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:54.464472  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:54.694004  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:54.970052  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:55.194435  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:55.465885  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:55.745946  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0510 16:56:55.964458  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:56.195017  731104 kapi.go:107] duration metric: took 1m29.004082213s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0510 16:56:56.197052  731104 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-088134 cluster.
	I0510 16:56:56.199340  731104 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0510 16:56:56.200506  731104 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0510 16:56:56.464448  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:56.964413  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:57.464331  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:57.963995  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:58.464735  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:58.964374  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:59.463633  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:56:59.964413  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:57:00.464019  731104 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0510 16:57:00.965683  731104 kapi.go:107] duration metric: took 1m36.505152543s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0510 16:57:00.967242  731104 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, ingress-dns, default-storageclass, nvidia-device-plugin, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0510 16:57:00.968556  731104 addons.go:514] duration metric: took 1m44.142027482s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin ingress-dns default-storageclass nvidia-device-plugin cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0510 16:57:00.968601  731104 start.go:246] waiting for cluster config update ...
	I0510 16:57:00.968642  731104 start.go:255] writing updated cluster config ...
	I0510 16:57:00.968957  731104 ssh_runner.go:195] Run: rm -f paused
	I0510 16:57:00.972751  731104 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 16:57:00.975926  731104 pod_ready.go:83] waiting for pod "coredns-674b8bbfcf-n4msm" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.979996  731104 pod_ready.go:94] pod "coredns-674b8bbfcf-n4msm" is "Ready"
	I0510 16:57:00.980019  731104 pod_ready.go:86] duration metric: took 4.069989ms for pod "coredns-674b8bbfcf-n4msm" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.982012  731104 pod_ready.go:83] waiting for pod "etcd-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.985586  731104 pod_ready.go:94] pod "etcd-addons-088134" is "Ready"
	I0510 16:57:00.985604  731104 pod_ready.go:86] duration metric: took 3.570305ms for pod "etcd-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.987493  731104 pod_ready.go:83] waiting for pod "kube-apiserver-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.990926  731104 pod_ready.go:94] pod "kube-apiserver-addons-088134" is "Ready"
	I0510 16:57:00.990942  731104 pod_ready.go:86] duration metric: took 3.430544ms for pod "kube-apiserver-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:00.992702  731104 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:01.376499  731104 pod_ready.go:94] pod "kube-controller-manager-addons-088134" is "Ready"
	I0510 16:57:01.376540  731104 pod_ready.go:86] duration metric: took 383.816874ms for pod "kube-controller-manager-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:01.576468  731104 pod_ready.go:83] waiting for pod "kube-proxy-rwb2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:01.977103  731104 pod_ready.go:94] pod "kube-proxy-rwb2j" is "Ready"
	I0510 16:57:01.977131  731104 pod_ready.go:86] duration metric: took 400.634309ms for pod "kube-proxy-rwb2j" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:02.177677  731104 pod_ready.go:83] waiting for pod "kube-scheduler-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:02.577246  731104 pod_ready.go:94] pod "kube-scheduler-addons-088134" is "Ready"
	I0510 16:57:02.577278  731104 pod_ready.go:86] duration metric: took 399.57116ms for pod "kube-scheduler-addons-088134" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 16:57:02.577289  731104 pod_ready.go:40] duration metric: took 1.604503102s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 16:57:02.623090  731104 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 16:57:02.625834  731104 out.go:177] * Done! kubectl is now configured to use "addons-088134" cluster and "default" namespace by default
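The kapi.go:96 polling above is how minikube decides an addon is up: each addon's pods are listed by label selector on a short interval until every pod leaves Pending, and kapi.go:107 records the total wait. A minimal client-go sketch of that pattern follows; the helper name, 500ms interval and 18m timeout are assumptions, not minikube's actual values.

    // waitforpods.go: poll pods matching a label selector until all are Running,
    // the pattern behind the kapi.go:96/kapi.go:107 lines above. The helper name,
    // 500ms interval and 18m timeout are assumptions, not minikube's actual values.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 18*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient error or pods not created yet: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForPods(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
            panic(err)
        }
    }

On this run the waits finished at 1m20s (registry), 1m25s (ingress-nginx), 1m29s (gcp-auth) and 1m36s (csi-hostpath-driver), all well inside the timeout; the failure comes later, in the nginx test pod.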
	
	
	==> CRI-O <==
	May 10 17:01:26 addons-088134 crio[1051]: time="2025-05-10 17:01:26.247542224Z" level=info msg="Image docker.io/nginx:alpine not found" id=82a77349-7ace-41d3-b695-aa39ca4e125f name=/runtime.v1.ImageService/ImageStatus
	May 10 17:01:38 addons-088134 crio[1051]: time="2025-05-10 17:01:38.246767200Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e6644d1e-e1d1-46b5-92c5-9f87eefb4aa5 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:01:38 addons-088134 crio[1051]: time="2025-05-10 17:01:38.247012141Z" level=info msg="Image docker.io/nginx:alpine not found" id=e6644d1e-e1d1-46b5-92c5-9f87eefb4aa5 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:01:52 addons-088134 crio[1051]: time="2025-05-10 17:01:52.247647619Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ed14a9e6-9f3d-4627-9f8c-99501676a2f4 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:01:52 addons-088134 crio[1051]: time="2025-05-10 17:01:52.247956915Z" level=info msg="Image docker.io/nginx:alpine not found" id=ed14a9e6-9f3d-4627-9f8c-99501676a2f4 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:01:52 addons-088134 crio[1051]: time="2025-05-10 17:01:52.248573212Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=1b2f3e46-be7a-43f4-8d2f-3c865533d57f name=/runtime.v1.ImageService/PullImage
	May 10 17:01:52 addons-088134 crio[1051]: time="2025-05-10 17:01:52.249824750Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	May 10 17:02:22 addons-088134 crio[1051]: time="2025-05-10 17:02:22.902105066Z" level=info msg="Pulling image: docker.io/nginx:latest" id=5ddd3339-e308-4efa-92ed-fec80b64b645 name=/runtime.v1.ImageService/PullImage
	May 10 17:02:22 addons-088134 crio[1051]: time="2025-05-10 17:02:22.918720258Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	May 10 17:02:35 addons-088134 crio[1051]: time="2025-05-10 17:02:35.246819495Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=09df5654-4986-4420-bd32-adf51b90db5b name=/runtime.v1.ImageService/ImageStatus
	May 10 17:02:35 addons-088134 crio[1051]: time="2025-05-10 17:02:35.247119805Z" level=info msg="Image docker.io/nginx:alpine not found" id=09df5654-4986-4420-bd32-adf51b90db5b name=/runtime.v1.ImageService/ImageStatus
	May 10 17:02:48 addons-088134 crio[1051]: time="2025-05-10 17:02:48.247095912Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b0c3d529-8519-47a0-a79c-1ce535f4ed64 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:02:48 addons-088134 crio[1051]: time="2025-05-10 17:02:48.247320140Z" level=info msg="Image docker.io/nginx:alpine not found" id=b0c3d529-8519-47a0-a79c-1ce535f4ed64 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:01 addons-088134 crio[1051]: time="2025-05-10 17:03:01.247997400Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0041020d-b6fe-46b5-a320-b8661ec01b15 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:01 addons-088134 crio[1051]: time="2025-05-10 17:03:01.248240526Z" level=info msg="Image docker.io/nginx:alpine not found" id=0041020d-b6fe-46b5-a320-b8661ec01b15 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:14 addons-088134 crio[1051]: time="2025-05-10 17:03:14.246410589Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=11af2272-97e4-45eb-b986-4abe524c3cbf name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:14 addons-088134 crio[1051]: time="2025-05-10 17:03:14.246655779Z" level=info msg="Image docker.io/nginx:alpine not found" id=11af2272-97e4-45eb-b986-4abe524c3cbf name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:27 addons-088134 crio[1051]: time="2025-05-10 17:03:27.247183199Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=9959edeb-7807-4d10-baf9-0f052ab9a1f0 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:27 addons-088134 crio[1051]: time="2025-05-10 17:03:27.247519464Z" level=info msg="Image docker.io/nginx:alpine not found" id=9959edeb-7807-4d10-baf9-0f052ab9a1f0 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:39 addons-088134 crio[1051]: time="2025-05-10 17:03:39.246722330Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fd17ccd1-5b76-4d8b-92cc-72499f1d8420 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:39 addons-088134 crio[1051]: time="2025-05-10 17:03:39.247041228Z" level=info msg="Image docker.io/nginx:alpine not found" id=fd17ccd1-5b76-4d8b-92cc-72499f1d8420 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:51 addons-088134 crio[1051]: time="2025-05-10 17:03:51.247498113Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c164bd11-1d92-4585-9418-605768fe7d08 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:51 addons-088134 crio[1051]: time="2025-05-10 17:03:51.247774780Z" level=info msg="Image docker.io/nginx:alpine not found" id=c164bd11-1d92-4585-9418-605768fe7d08 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:03:51 addons-088134 crio[1051]: time="2025-05-10 17:03:51.248310026Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=7f2673a4-f68f-41ab-91f0-6cb3752f88f6 name=/runtime.v1.ImageService/PullImage
	May 10 17:03:51 addons-088134 crio[1051]: time="2025-05-10 17:03:51.286093798Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
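The CRI-O excerpt is the runtime side of the Ingress failure in this report: the kubelet keeps calling ImageStatus for docker.io/nginx:alpine, CRI-O keeps answering "not found", and each PullImage attempt ends with another "Trying to access" line and no stored image, which is what the pod's ImagePullBackOff state reflects (Docker Hub throttling or registry unreachability on the CI host is the likely cause, though the log itself does not say). The same ImageStatus call can be reproduced against the socket named in the node annotations below; a sketch, assuming that socket path:

    // imagestatus.go: ask CRI-O over its unix socket whether an image is present,
    // mirroring the /runtime.v1.ImageService/ImageStatus calls in the log above.
    // Sketch only; the socket path matches this report's node but may differ elsewhere.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimev1.NewImageServiceClient(conn)
        resp, err := client.ImageStatus(ctx, &runtimev1.ImageStatusRequest{
            Image: &runtimev1.ImageSpec{Image: "docker.io/nginx:alpine"},
        })
        if err != nil {
            panic(err)
        }
        if resp.Image == nil {
            fmt.Println("image docker.io/nginx:alpine not found") // matches the log lines above
        } else {
            fmt.Println("image present:", resp.Image.Id)
        }
    }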
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0289e5d7dcfc0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   6ecaa28cfa8c9       busybox
	c49297165e565       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   057ccd9a42e9d       csi-hostpathplugin-cbgm9
	e0c73dfe33cec       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   057ccd9a42e9d       csi-hostpathplugin-cbgm9
	b53a09993cebd       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   057ccd9a42e9d       csi-hostpathplugin-cbgm9
	7e0cd880284af       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   057ccd9a42e9d       csi-hostpathplugin-cbgm9
	98522ad6d99b3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   057ccd9a42e9d       csi-hostpathplugin-cbgm9
	e16557a7639fa       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             7 minutes ago       Running             controller                               0                   3c5807388c814       ingress-nginx-controller-7c9f76cd49-qbgd8
	63ddd40f438c7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   1f44f62fc2589       snapshot-controller-68b874b76f-cxdtz
	a2dbe07924a47       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   057ccd9a42e9d       csi-hostpathplugin-cbgm9
	7679f6cfb63f6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   7a2f1abc2c744       csi-hostpath-attacher-0
	52108da465b3c       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             7 minutes ago       Running             minikube-ingress-dns                     0                   a54134a808151       kube-ingress-dns-minikube
	cdde87053fb27       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              patch                                    0                   adff6bd9586e7       ingress-nginx-admission-patch-js6jf
	93d4ef6b408a1       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   f7f1cf6ea4798       snapshot-controller-68b874b76f-qng99
	99b3c4effa910       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   28b7fe82d0deb       csi-hostpath-resizer-0
	76eddab5e9c9f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              create                                   0                   cbbbf9bcddde9       ingress-nginx-admission-create-f952k
	6f3083ad618b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   d21551c2870ff       storage-provisioner
	e812c145b81cf       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                                                             7 minutes ago       Running             coredns                                  0                   81d12b3b0a2f1       coredns-674b8bbfcf-n4msm
	b70a60379aeea       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                                                             8 minutes ago       Running             kindnet-cni                              0                   fbd9b8064a4da       kindnet-9929f
	b9b40eeed72ce       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                                                             8 minutes ago       Running             kube-proxy                               0                   d136a9352b030       kube-proxy-rwb2j
	b5770c4e2c673       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4                                                                             8 minutes ago       Running             kube-apiserver                           0                   8c9f3e576d76d       kube-apiserver-addons-088134
	e353710533230       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                                                             8 minutes ago       Running             kube-scheduler                           0                   370464a525463       kube-scheduler-addons-088134
	7ea1e306698b2       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                                                             8 minutes ago       Running             kube-controller-manager                  0                   08327187c3b18       kube-controller-manager-addons-088134
	1e78801fed908       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                                                             8 minutes ago       Running             etcd                                     0                   2a1843433686f       etcd-addons-088134
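The table above reads like crictl ps -a output, which is itself a CRI ListContainers call; the two Exited entries (create and patch) are the ingress-nginx kube-webhook-certgen jobs completing normally, while the absent nginx row matches the image that never pulled. A sketch of the same listing in Go, with the same socket-path assumption as before:

    // containers.go: list containers straight from CRI-O, the source of the
    // "container status" table above. Sketch; socket path as in the node annotations.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimev1.NewRuntimeServiceClient(conn)
        resp, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // truncated id, state enum (e.g. CONTAINER_RUNNING) and container name
            fmt.Printf("%.13s  %-20s %s\n", c.Id, c.State, c.Metadata.Name)
        }
    }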
	
	
	==> coredns [e812c145b81cf9e9d4792e1c5dfc6a18881e0c38667fed9f37ea51d6155447b6] <==
	[INFO] 10.244.0.19:41430 - 60591 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000162704s
	[INFO] 10.244.0.19:39041 - 27741 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004008951s
	[INFO] 10.244.0.19:39041 - 28058 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005266612s
	[INFO] 10.244.0.19:48593 - 6945 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005297496s
	[INFO] 10.244.0.19:48593 - 7206 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005444124s
	[INFO] 10.244.0.19:46297 - 42903 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004088541s
	[INFO] 10.244.0.19:46297 - 42616 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007767338s
	[INFO] 10.244.0.19:36799 - 1231 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124063s
	[INFO] 10.244.0.19:36799 - 1514 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178124s
	[INFO] 10.244.0.22:38130 - 28750 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272398s
	[INFO] 10.244.0.22:49886 - 59874 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000377322s
	[INFO] 10.244.0.22:33015 - 1597 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000485937s
	[INFO] 10.244.0.22:43790 - 48974 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123618s
	[INFO] 10.244.0.22:50579 - 38783 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113828s
	[INFO] 10.244.0.22:43801 - 50419 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000397572s
	[INFO] 10.244.0.22:39106 - 57274 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006031932s
	[INFO] 10.244.0.22:48132 - 36422 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006927122s
	[INFO] 10.244.0.22:43550 - 23549 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007352277s
	[INFO] 10.244.0.22:51021 - 31565 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007545825s
	[INFO] 10.244.0.22:52551 - 19093 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006624193s
	[INFO] 10.244.0.22:32823 - 55046 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.016421236s
	[INFO] 10.244.0.22:59663 - 14459 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002024561s
	[INFO] 10.244.0.22:58147 - 56118 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002083574s
	[INFO] 10.244.0.25:53567 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000195327s
	[INFO] 10.244.0.25:49086 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157644s
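The CoreDNS pairs of NXDOMAIN answers ending in a single NOERROR are the normal search-path walk: with the default pod resolv.conf (ndots:5 plus the cluster and GCE search suffixes visible in the queries), any name with fewer than five dots is tried against every suffix (cluster.local, us-central1-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal) before the literal name resolves. Run inside a pod, a single short-name lookup reproduces the whole cascade; a sketch:

    // lookup.go: one in-pod lookup of a short service name; with the default
    // dnsPolicy (ndots:5 plus cluster search domains) the resolver expands it
    // into the NXDOMAIN/NOERROR sequence CoreDNS recorded above. Sketch only.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // "registry.kube-system" has fewer than 5 dots, so the resolver tries
        // registry.kube-system.<each search suffix> before the literal name.
        addrs, err := net.LookupHost("registry.kube-system")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // e.g. the registry service ClusterIP
    }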
	
	
	==> describe nodes <==
	Name:               addons-088134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-088134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=addons-088134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T16_55_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-088134
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-088134"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 16:55:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-088134
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:03:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:55:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:55:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:55:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:02:50 +0000   Sat, 10 May 2025 16:56:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-088134
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 20500d91395d44e28235d4dd9b851800
	  System UUID:                b82e7783-6ef2-4a0a-9063-340ec333f400
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-7c9f76cd49-qbgd8    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m28s
	  kube-system                 coredns-674b8bbfcf-n4msm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m34s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 csi-hostpathplugin-cbgm9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 etcd-addons-088134                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m40s
	  kube-system                 kindnet-9929f                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m35s
	  kube-system                 kube-apiserver-addons-088134                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-controller-manager-addons-088134        200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-rwb2j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-scheduler-addons-088134                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 snapshot-controller-68b874b76f-cxdtz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 snapshot-controller-68b874b76f-qng99         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m30s                  kube-proxy       
	  Normal   Starting                 8m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m46s (x8 over 8m46s)  kubelet          Node addons-088134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m46s (x8 over 8m46s)  kubelet          Node addons-088134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m46s (x8 over 8m46s)  kubelet          Node addons-088134 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m40s                  kubelet          Node addons-088134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m40s                  kubelet          Node addons-088134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m40s                  kubelet          Node addons-088134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m35s                  node-controller  Node addons-088134 event: Registered Node addons-088134 in Controller
	  Normal   NodeReady                7m50s                  kubelet          Node addons-088134 status is now: NodeReady
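The Allocated resources block is kubectl summing container requests and limits over the node's non-terminated pods; the 950m CPU request figure is 100+100+100+100+250+200+100 from the pod table above. The same arithmetic in client-go, as a sketch (the field selector is the one kubectl describe uses; the kubeconfig path is an assumption):

    // requests.go: recompute the node's "Allocated resources" CPU figure by
    // summing container CPU requests over non-terminated pods, the same
    // arithmetic kubectl describe performs. Sketch only.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
            FieldSelector: "spec.nodeName=addons-088134,status.phase!=Succeeded,status.phase!=Failed",
        })
        if err != nil {
            panic(err)
        }
        total := resource.NewMilliQuantity(0, resource.DecimalSI)
        for _, p := range pods.Items {
            for _, c := range p.Spec.Containers {
                if req, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
                    total.Add(req)
                }
            }
        }
        fmt.Printf("cpu requests: %s\n", total) // 950m on this node per the table above
    }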
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +1.002546] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +2.011769] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000002] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000003] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +4.063544] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000009] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003973] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +8.191083] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	
	
	==> etcd [1e78801fed90811e8a35c870937504b5146bf46e93829751abb0bd47821c3fde] <==
	{"level":"info","ts":"2025-05-10T16:55:07.071776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T16:55:07.073027Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T16:55:07.073091Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-05-10T16:55:18.655716Z","caller":"traceutil/trace.go:171","msg":"trace[1793038919] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"109.985081ms","start":"2025-05-10T16:55:18.545691Z","end":"2025-05-10T16:55:18.655677Z","steps":["trace[1793038919] 'process raft request'  (duration: 17.506981ms)","trace[1793038919] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/serviceaccounts/kube-system/disruption-controller; req_size:202; } (duration: 85.2594ms)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T16:55:19.653950Z","caller":"traceutil/trace.go:171","msg":"trace[1974616962] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"292.371235ms","start":"2025-05-10T16:55:19.361554Z","end":"2025-05-10T16:55:19.653925Z","steps":["trace[1974616962] 'process raft request'  (duration: 197.541808ms)","trace[1974616962] 'compare'  (duration: 93.069113ms)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T16:55:19.950972Z","caller":"traceutil/trace.go:171","msg":"trace[1679853814] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"102.173802ms","start":"2025-05-10T16:55:19.848781Z","end":"2025-05-10T16:55:19.950955Z","steps":["trace[1679853814] 'process raft request'  (duration: 102.003177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:55:20.164634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.737288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T16:55:20.164832Z","caller":"traceutil/trace.go:171","msg":"trace[999111871] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:428; }","duration":"109.974543ms","start":"2025-05-10T16:55:20.054837Z","end":"2025-05-10T16:55:20.164811Z","steps":["trace[999111871] 'agreement among raft nodes before linearized reading'  (duration: 109.714427ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.169270Z","caller":"traceutil/trace.go:171","msg":"trace[780922946] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"101.070958ms","start":"2025-05-10T16:55:20.068183Z","end":"2025-05-10T16:55:20.169254Z","steps":["trace[780922946] 'process raft request'  (duration: 85.021495ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.463412Z","caller":"traceutil/trace.go:171","msg":"trace[1196043989] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"100.088014ms","start":"2025-05-10T16:55:20.363307Z","end":"2025-05-10T16:55:20.463395Z","steps":["trace[1196043989] 'process raft request'  (duration: 100.010338ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.556806Z","caller":"traceutil/trace.go:171","msg":"trace[99842361] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"192.43927ms","start":"2025-05-10T16:55:20.364343Z","end":"2025-05-10T16:55:20.556782Z","steps":["trace[99842361] 'process raft request'  (duration: 191.773888ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.557005Z","caller":"traceutil/trace.go:171","msg":"trace[2105763833] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"190.924065ms","start":"2025-05-10T16:55:20.366068Z","end":"2025-05-10T16:55:20.556992Z","steps":["trace[2105763833] 'process raft request'  (duration: 190.222065ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.560703Z","caller":"traceutil/trace.go:171","msg":"trace[1451780068] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"192.111428ms","start":"2025-05-10T16:55:20.368577Z","end":"2025-05-10T16:55:20.560689Z","steps":["trace[1451780068] 'process raft request'  (duration: 187.785924ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.560942Z","caller":"traceutil/trace.go:171","msg":"trace[479795440] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"191.72829ms","start":"2025-05-10T16:55:20.368505Z","end":"2025-05-10T16:55:20.560234Z","steps":["trace[479795440] 'process raft request'  (duration: 187.830438ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:55:20.765194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.238789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-05-10T16:55:20.765361Z","caller":"traceutil/trace.go:171","msg":"trace[754062756] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:459; }","duration":"104.431846ms","start":"2025-05-10T16:55:20.660912Z","end":"2025-05-10T16:55:20.765344Z","steps":["trace[754062756] 'agreement among raft nodes before linearized reading'  (duration: 104.193814ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:55:20.766161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.177977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-05-10T16:55:20.766204Z","caller":"traceutil/trace.go:171","msg":"trace[563745967] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:459; }","duration":"105.24304ms","start":"2025-05-10T16:55:20.660951Z","end":"2025-05-10T16:55:20.766194Z","steps":["trace[563745967] 'agreement among raft nodes before linearized reading'  (duration: 105.179473ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.766879Z","caller":"traceutil/trace.go:171","msg":"trace[1544329589] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"103.395997ms","start":"2025-05-10T16:55:20.663473Z","end":"2025-05-10T16:55:20.766869Z","steps":["trace[1544329589] 'process raft request'  (duration: 100.915053ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.767214Z","caller":"traceutil/trace.go:171","msg":"trace[716921846] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"101.480542ms","start":"2025-05-10T16:55:20.665724Z","end":"2025-05-10T16:55:20.767205Z","steps":["trace[716921846] 'process raft request'  (duration: 98.846083ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.767462Z","caller":"traceutil/trace.go:171","msg":"trace[1107158292] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"103.812816ms","start":"2025-05-10T16:55:20.663635Z","end":"2025-05-10T16:55:20.767448Z","steps":["trace[1107158292] 'process raft request'  (duration: 100.893861ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.952550Z","caller":"traceutil/trace.go:171","msg":"trace[168546519] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"187.152437ms","start":"2025-05-10T16:55:20.765381Z","end":"2025-05-10T16:55:20.952533Z","steps":["trace[168546519] 'process raft request'  (duration: 179.936197ms)"],"step_count":1}
	{"level":"info","ts":"2025-05-10T16:55:20.953444Z","caller":"traceutil/trace.go:171","msg":"trace[1318291614] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"186.597479ms","start":"2025-05-10T16:55:20.766831Z","end":"2025-05-10T16:55:20.953429Z","steps":["trace[1318291614] 'process raft request'  (duration: 186.204936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-05-10T16:57:20.946308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.526721ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128037157923451789 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ipaddresses/10.101.189.46\" mod_revision:0 > success:<request_put:<key:\"/registry/ipaddresses/10.101.189.46\" value_size:540 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-05-10T16:57:20.946431Z","caller":"traceutil/trace.go:171","msg":"trace[1761554193] transaction","detail":"{read_only:false; response_revision:1378; number_of_response:1; }","duration":"176.149156ms","start":"2025-05-10T16:57:20.770256Z","end":"2025-05-10T16:57:20.946405Z","steps":["trace[1761554193] 'process raft request'  (duration: 52.079278ms)","trace[1761554193] 'compare'  (duration: 123.382791ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:03:52 up  2:46,  0 users,  load average: 0.09, 13.30, 58.29
	Linux addons-088134 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b70a60379aeeacf04436fa8980dcb6ab38da85f87f9f253453027f66640e2581] <==
	I0510 17:01:50.851590       1 main.go:301] handling current node
	I0510 17:02:00.847535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:02:00.847585       1 main.go:301] handling current node
	I0510 17:02:10.853074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:02:10.853115       1 main.go:301] handling current node
	I0510 17:02:20.845084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:02:20.845140       1 main.go:301] handling current node
	I0510 17:02:30.846540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:02:30.846577       1 main.go:301] handling current node
	I0510 17:02:40.847539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:02:40.847596       1 main.go:301] handling current node
	I0510 17:02:50.845036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:02:50.845080       1 main.go:301] handling current node
	I0510 17:03:00.845210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:00.845260       1 main.go:301] handling current node
	I0510 17:03:10.845435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:10.845493       1 main.go:301] handling current node
	I0510 17:03:20.844565       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:20.844606       1 main.go:301] handling current node
	I0510 17:03:30.851517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:30.851554       1 main.go:301] handling current node
	I0510 17:03:40.844537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:40.844922       1 main.go:301] handling current node
	I0510 17:03:50.851510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:03:50.851549       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b5770c4e2c67394cdfbeabd79aa8d3a4ab1a86ae2dc7c65e9a22daa83002e410] <==
	I0510 16:56:22.858515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 16:56:42.748635       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 16:56:42.748736       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0510 16:56:42.748834       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.233.178:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.233.178:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.233.178:443: connect: connection refused" logger="UnhandledError"
	E0510 16:56:42.750376       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.233.178:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.233.178:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.233.178:443: connect: connection refused" logger="UnhandledError"
	I0510 16:56:42.784025       1 handler.go:288] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0510 16:57:11.381529       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41128: use of closed network connection
	E0510 16:57:11.548784       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41158: use of closed network connection
	I0510 16:57:14.596509       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:20.971613       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:20.973921       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.189.46"}
	I0510 16:57:25.855710       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:32.514968       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:33.713219       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:38.166981       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0510 16:57:38.349640       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.38.46"}
	I0510 16:57:38.353544       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 16:57:43.146215       1 handler.go:288] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0510 16:57:43.761653       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W0510 16:57:44.163987       1 cacher.go:183] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0510 16:57:53.822344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0510 16:57:57.592768       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [7ea1e306698b2d650f44b1de9eb0091223e6eb458ffec2c00e8ee7cbd23a65b6] <==
	I0510 16:55:46.586721       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 16:56:06.066401       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0510 16:56:16.179721       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 16:56:16.594893       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0510 16:57:24.785931       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0510 16:57:42.645007       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0510 16:57:42.828736       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0510 16:57:44.165545       1 reflector.go:200] "Failed to watch" err="the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 16:57:45.165456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0510 16:57:46.195189       1 shared_informer.go:350] "Waiting for caches to sync" controller="resource quota"
	I0510 16:57:46.195226       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 16:57:46.607789       1 shared_informer.go:350] "Waiting for caches to sync" controller="garbage collector"
	I0510 16:57:46.607837       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	E0510 16:57:47.090937       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 16:57:53.242601       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0510 16:57:53.323010       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	E0510 16:58:05.874885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 16:58:26.933110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 16:59:00.806206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 16:59:33.128452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:00:23.417662       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:01:05.474352       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:01:58.029381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:02:47.981275       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0510 17:03:37.799323       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [b9b40eeed72ceafe195eff643c7bbf84bcc83631e8193e0fea1cd093852d843b] <==
	I0510 16:55:20.067315       1 server_linux.go:63] "Using iptables proxy"
	I0510 16:55:20.945955       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 16:55:20.946147       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 16:55:21.551531       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 16:55:21.551638       1 server_linux.go:145] "Using iptables Proxier"
	I0510 16:55:21.663057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 16:55:21.663804       1 server.go:516] "Version info" version="v1.33.0"
	I0510 16:55:21.664560       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 16:55:21.666575       1 config.go:199] "Starting service config controller"
	I0510 16:55:21.666606       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 16:55:21.666641       1 config.go:105] "Starting endpoint slice config controller"
	I0510 16:55:21.666651       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 16:55:21.666666       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 16:55:21.666671       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 16:55:21.667612       1 config.go:329] "Starting node config controller"
	I0510 16:55:21.667623       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 16:55:21.766987       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 16:55:21.767713       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 16:55:21.767585       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 16:55:21.767545       1 shared_informer.go:357] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e353710533230f73d1cc7951b4ae81a4668224ac29497acdc584f5eece3db3ae] <==
	E0510 16:55:09.058014       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0510 16:55:09.058016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0510 16:55:09.058038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 16:55:09.058112       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 16:55:09.058141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 16:55:09.058151       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0510 16:55:09.058197       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 16:55:09.058221       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0510 16:55:09.058221       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0510 16:55:09.058320       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 16:55:09.058348       1 reflector.go:200] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0510 16:55:09.058412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 16:55:09.058434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 16:55:09.058470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 16:55:09.945178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0510 16:55:09.945178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0510 16:55:09.953668       1 reflector.go:200] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0510 16:55:09.991983       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0510 16:55:10.015479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0510 16:55:10.044109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0510 16:55:10.086064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0510 16:55:10.128629       1 reflector.go:200] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0510 16:55:10.167474       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0510 16:55:10.197982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I0510 16:55:11.755232       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 17:03:01 addons-088134 kubelet[1692]: E0510 17:03:01.454754    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896581454454241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:10 addons-088134 kubelet[1692]: E0510 17:03:10.246664    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="58e565b4-a342-41bf-94fe-2b8a1251e1d1"
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.289686    1692 manager.go:1116] Failed to create existing container: /docker/bde85e095a689bd54666e2402a657000d24a2c7705d3306e0ea57357e438c2b3/crio-833ff395fd4c00114be8e3b230598ba38cc8a71634b5595e02c2c19c70422572: Error finding container 833ff395fd4c00114be8e3b230598ba38cc8a71634b5595e02c2c19c70422572: Status 404 returned error can't find the container with id 833ff395fd4c00114be8e3b230598ba38cc8a71634b5595e02c2c19c70422572
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.290017    1692 manager.go:1116] Failed to create existing container: /crio-833ff395fd4c00114be8e3b230598ba38cc8a71634b5595e02c2c19c70422572: Error finding container 833ff395fd4c00114be8e3b230598ba38cc8a71634b5595e02c2c19c70422572: Status 404 returned error can't find the container with id 833ff395fd4c00114be8e3b230598ba38cc8a71634b5595e02c2c19c70422572
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.297406    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9b8d690b6ff696a7ef2ec7f7f5896eb5eef5bcc04995214b5e90eef27d8baec3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9b8d690b6ff696a7ef2ec7f7f5896eb5eef5bcc04995214b5e90eef27d8baec3/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.299642    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e474608b4929bc72a83c450d690b89624ab09a095aca127f6bb4730a43490583/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e474608b4929bc72a83c450d690b89624ab09a095aca127f6bb4730a43490583/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.358441    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9b8d690b6ff696a7ef2ec7f7f5896eb5eef5bcc04995214b5e90eef27d8baec3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9b8d690b6ff696a7ef2ec7f7f5896eb5eef5bcc04995214b5e90eef27d8baec3/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.362724    1692 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e474608b4929bc72a83c450d690b89624ab09a095aca127f6bb4730a43490583/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e474608b4929bc72a83c450d690b89624ab09a095aca127f6bb4730a43490583/diff: no such file or directory, extraDiskErr: <nil>
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.456733    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896591456470180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:11 addons-088134 kubelet[1692]: E0510 17:03:11.456812    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896591456470180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:14 addons-088134 kubelet[1692]: E0510 17:03:14.246975    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="670a8744-ab16-44a6-a1c9-0a18c96cf593"
	May 10 17:03:21 addons-088134 kubelet[1692]: E0510 17:03:21.459273    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896601459008414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:21 addons-088134 kubelet[1692]: E0510 17:03:21.459336    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896601459008414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:22 addons-088134 kubelet[1692]: E0510 17:03:22.246435    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="58e565b4-a342-41bf-94fe-2b8a1251e1d1"
	May 10 17:03:27 addons-088134 kubelet[1692]: E0510 17:03:27.247799    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="670a8744-ab16-44a6-a1c9-0a18c96cf593"
	May 10 17:03:31 addons-088134 kubelet[1692]: E0510 17:03:31.462024    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896611461783513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:31 addons-088134 kubelet[1692]: E0510 17:03:31.462066    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896611461783513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:34 addons-088134 kubelet[1692]: E0510 17:03:34.247310    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="58e565b4-a342-41bf-94fe-2b8a1251e1d1"
	May 10 17:03:39 addons-088134 kubelet[1692]: E0510 17:03:39.247384    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="670a8744-ab16-44a6-a1c9-0a18c96cf593"
	May 10 17:03:41 addons-088134 kubelet[1692]: E0510 17:03:41.464823    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896621464612608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:41 addons-088134 kubelet[1692]: E0510 17:03:41.464860    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896621464612608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:47 addons-088134 kubelet[1692]: E0510 17:03:47.247120    1692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="58e565b4-a342-41bf-94fe-2b8a1251e1d1"
	May 10 17:03:51 addons-088134 kubelet[1692]: I0510 17:03:51.247443    1692 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	May 10 17:03:51 addons-088134 kubelet[1692]: E0510 17:03:51.466644    1692 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896631466361093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:03:51 addons-088134 kubelet[1692]: E0510 17:03:51.466685    1692 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746896631466361093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:544733,},InodesUsed:&UInt64Value{Value:217,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6f3083ad618b6ea2836326198796611c276a0e493b0cc7dabfd052526bce9edc] <==
	W0510 17:03:26.437088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:28.440322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:28.445207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:30.448320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:30.452306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:32.455460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:32.459744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:34.463059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:34.467042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:36.470369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:36.475649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:38.479009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:38.483320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:40.486784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:40.491007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:42.494176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:42.498056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:44.500754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:44.504236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:46.508205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:46.512494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:48.515877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:48.520360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:50.523032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:03:50.526954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
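Note: the kubelet log above isolates the root cause of this failure. Every pull of docker.io/nginx:alpine (pod default/nginx) and docker.io/nginx (pod default/task-pv-pod) ends in "toomanyrequests: You have reached your unauthenticated pull rate limit", so both containers stay in ImagePullBackOff until the test's readiness wait expires. A possible local mitigation, sketched here and not part of this run, is to pre-load the image so the kubelet never has to contact Docker Hub:

	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-088134 image load docker.io/nginx:alpine

minikube's "image load" copies an image from the local daemon into the cluster's container runtime (cri-o in this run), which sidesteps in-cluster pulls entirely.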
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-088134 -n addons-088134
helpers_test.go:261: (dbg) Run:  kubectl --context addons-088134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-088134 describe pod nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-088134 describe pod nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf: exit status 1 (73.065044ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-088134/192.168.49.2
	Start Time:       Sat, 10 May 2025 16:57:38 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vv759 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vv759:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m14s                 default-scheduler  Successfully assigned default/nginx to addons-088134
	  Warning  Failed     5m42s                 kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m54s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x4 over 5m42s)   kubelet            Error: ErrImagePull
	  Warning  Failed     90s (x2 over 4m26s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    13s (x11 over 5m41s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13s (x11 over 5m41s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    1s (x5 over 6m14s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-088134/192.168.49.2
	Start Time:       Sat, 10 May 2025 16:57:50 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v9qc6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-v9qc6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-088134
	  Warning  Failed     2m24s (x2 over 3m55s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    92s (x4 over 6m2s)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     57s (x2 over 4m57s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     57s (x4 over 4m57s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x9 over 4m56s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     5s (x9 over 4m56s)     kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f952k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-js6jf" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-088134 describe pod nginx task-pv-pod ingress-nginx-admission-create-f952k ingress-nginx-admission-patch-js6jf: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.613126993s)
--- FAIL: TestAddons/parallel/CSI (388.28s)
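
Both pods in the failure above (nginx and task-pv-pod) were stuck for the same reason: docker.io rejected the anonymous image pulls with toomanyrequests. A minimal mitigation sketch for this environment, assuming Docker Hub credentials are available to the job; the secret name dockerhub-creds and the DOCKER_USER/DOCKER_PASS variables are placeholders, not part of the test suite:

	# Authenticate pulls so kubelet is no longer subject to the anonymous rate limit.
	kubectl --context addons-088134 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# Attach the secret to the default service account so new pods in the namespace use it.
	kubectl --context addons-088134 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
	# Alternatively, side-load the images so no registry pull happens at all.
	minikube -p addons-088134 image load docker.io/nginx:alpine
	minikube -p addons-088134 image load docker.io/nginx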

TestFunctional/parallel/DashboardCmd (302.27s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-914764 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-914764 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-914764 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-914764 --alsologtostderr -v=1] stderr:
I0510 17:09:15.928954  769136 out.go:345] Setting OutFile to fd 1 ...
I0510 17:09:15.929784  769136 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:09:15.929798  769136 out.go:358] Setting ErrFile to fd 2...
I0510 17:09:15.929803  769136 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:09:15.930007  769136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
I0510 17:09:15.930289  769136 mustload.go:65] Loading cluster: functional-914764
I0510 17:09:15.930655  769136 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:09:15.931015  769136 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:09:15.948106  769136 host.go:66] Checking if "functional-914764" exists ...
I0510 17:09:15.948384  769136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0510 17:09:15.997584  769136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:15.987758789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0510 17:09:15.997746  769136 api_server.go:166] Checking apiserver status ...
I0510 17:09:15.997811  769136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0510 17:09:15.997858  769136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:09:16.017559  769136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:09:16.111189  769136 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5404/cgroup
I0510 17:09:16.120458  769136 api_server.go:182] apiserver freezer: "11:freezer:/docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio/crio-6471b9d7617bf5433e4f0daaece1f4915cb330eb85fdf4e0cb3c343d71412587"
I0510 17:09:16.120533  769136 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio/crio-6471b9d7617bf5433e4f0daaece1f4915cb330eb85fdf4e0cb3c343d71412587/freezer.state
I0510 17:09:16.129308  769136 api_server.go:204] freezer state: "THAWED"
I0510 17:09:16.129337  769136 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0510 17:09:16.133502  769136 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0510 17:09:16.133554  769136 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0510 17:09:16.133700  769136 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:09:16.133714  769136 addons.go:69] Setting dashboard=true in profile "functional-914764"
I0510 17:09:16.133724  769136 addons.go:238] Setting addon dashboard=true in "functional-914764"
I0510 17:09:16.133749  769136 host.go:66] Checking if "functional-914764" exists ...
I0510 17:09:16.134048  769136 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:09:16.154265  769136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0510 17:09:16.155671  769136 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0510 17:09:16.157107  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0510 17:09:16.157126  769136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0510 17:09:16.157192  769136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:09:16.174228  769136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:09:16.278031  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0510 17:09:16.278058  769136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0510 17:09:16.295841  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0510 17:09:16.295868  769136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0510 17:09:16.315663  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0510 17:09:16.315692  769136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0510 17:09:16.334330  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0510 17:09:16.334353  769136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0510 17:09:16.355803  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0510 17:09:16.355833  769136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0510 17:09:16.373322  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0510 17:09:16.373353  769136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0510 17:09:16.392361  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0510 17:09:16.392394  769136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0510 17:09:16.411730  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0510 17:09:16.411762  769136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0510 17:09:16.430646  769136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0510 17:09:16.430690  769136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0510 17:09:16.449067  769136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0510 17:09:17.167386  769136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-914764 addons enable metrics-server

I0510 17:09:17.168712  769136 addons.go:201] Writing out "functional-914764" config to set dashboard=true...
W0510 17:09:17.168961  769136 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0510 17:09:17.169646  769136 kapi.go:59] client config for functional-914764: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt", KeyFile:"/home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.key", CAFile:"/home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24b3a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0510 17:09:17.170115  769136 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0510 17:09:17.170134  769136 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0510 17:09:17.170139  769136 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0510 17:09:17.170145  769136 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0510 17:09:17.177290  769136 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  78766fe8-6911-4ef8-a269-44bac50b8cf6 797 0 2025-05-10 17:09:17 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-05-10 17:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.6.195,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.6.195],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0510 17:09:17.177425  769136 out.go:270] * Launching proxy ...
* Launching proxy ...
I0510 17:09:17.177490  769136 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-914764 proxy --port 36195]
I0510 17:09:17.177718  769136 dashboard.go:157] Waiting for kubectl to output host:port ...
I0510 17:09:17.225777  769136 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0510 17:09:17.225879  769136 out.go:270] * Verifying proxy health ...
* Verifying proxy health ...
I0510 17:09:17.235695  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63cdc1cf-79ac-415d-b7d4-2887b3f0db9e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000535b80 TLS:<nil>}
I0510 17:09:17.235814  769136 retry.go:31] will retry after 58.841µs: Temporary Error: unexpected response code: 503
I0510 17:09:17.239439  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66624263-7683-4f6a-8259-652c46a503b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001708e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a28c0 TLS:<nil>}
I0510 17:09:17.239499  769136 retry.go:31] will retry after 177.816µs: Temporary Error: unexpected response code: 503
I0510 17:09:17.243048  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92b45a3c-f25f-40f4-91a7-0cbd54b14d46] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0007c0c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000951e00 TLS:<nil>}
I0510 17:09:17.243099  769136 retry.go:31] will retry after 318.852µs: Temporary Error: unexpected response code: 503
I0510 17:09:17.246660  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a28fc7f-0354-4772-84d3-30120753fedf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001708fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000535e00 TLS:<nil>}
I0510 17:09:17.246722  769136 retry.go:31] will retry after 483.197µs: Temporary Error: unexpected response code: 503
I0510 17:09:17.250046  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6e208eb0-5363-4321-992c-ecb45b3e2dfa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0007c0f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e000 TLS:<nil>}
I0510 17:09:17.250106  769136 retry.go:31] will retry after 661.836µs: Temporary Error: unexpected response code: 503
I0510 17:09:17.253518  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63d923cf-b79a-4369-97de-947f19687381] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001564000 TLS:<nil>}
I0510 17:09:17.253583  769136 retry.go:31] will retry after 795.812µs: Temporary Error: unexpected response code: 503
I0510 17:09:17.257322  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d9d38a0-8f94-465c-b2ee-6271f875a1bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0007c1000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a2a00 TLS:<nil>}
I0510 17:09:17.257380  769136 retry.go:31] will retry after 1.456703ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.261428  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eb07abdf-39f8-4945-8e6a-e7d3a33d3922] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0007c1100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001564140 TLS:<nil>}
I0510 17:09:17.261488  769136 retry.go:31] will retry after 1.353934ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.265703  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[705b0e79-80cd-4446-8e78-89f25ec3df48] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0017090c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001564280 TLS:<nil>}
I0510 17:09:17.265743  769136 retry.go:31] will retry after 2.214105ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.271212  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[57225ec7-a869-4bdd-9bc5-7e4ff392bea8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001709180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e140 TLS:<nil>}
I0510 17:09:17.271271  769136 retry.go:31] will retry after 2.840486ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.276712  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f579bda5-9400-405d-88c5-70231e6e1f01] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e280 TLS:<nil>}
I0510 17:09:17.276770  769136 retry.go:31] will retry after 5.053291ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.284387  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9b27924-9899-4309-881e-1cf4968d4318] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001709280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a2b40 TLS:<nil>}
I0510 17:09:17.284453  769136 retry.go:31] will retry after 11.681348ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.299896  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0fe96f95-4332-4e55-a9df-8969eee0b454] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e3c0 TLS:<nil>}
I0510 17:09:17.299965  769136 retry.go:31] will retry after 17.090525ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.320059  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41296b2b-0cc5-41f4-bc00-8ffabc72e22f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001709380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a2c80 TLS:<nil>}
I0510 17:09:17.320141  769136 retry.go:31] will retry after 24.374976ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.348415  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[462af7bc-0146-4197-b424-0567966979dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e500 TLS:<nil>}
I0510 17:09:17.348492  769136 retry.go:31] will retry after 19.012081ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.370594  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a45710f9-043b-4086-81c2-67b4edd02243] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0007c1280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a2dc0 TLS:<nil>}
I0510 17:09:17.370676  769136 retry.go:31] will retry after 63.676578ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.437942  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36e71c4e-4f30-4f26-8a1d-8bd07f191a46] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001709480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015643c0 TLS:<nil>}
I0510 17:09:17.438021  769136 retry.go:31] will retry after 46.234394ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.488019  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[978d665b-31c7-48f4-871a-b18a320a75ae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001709500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e640 TLS:<nil>}
I0510 17:09:17.488084  769136 retry.go:31] will retry after 92.216376ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.584650  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a485c89b-d501-4060-8353-fa6795f1a5d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e780 TLS:<nil>}
I0510 17:09:17.584728  769136 retry.go:31] will retry after 191.699935ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.779929  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[02ec5c58-156f-44b7-a8b9-d064b22b0555] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc001709580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a2f00 TLS:<nil>}
I0510 17:09:17.779986  769136 retry.go:31] will retry after 198.161421ms: Temporary Error: unexpected response code: 503
I0510 17:09:17.981099  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30284d49-59ec-4354-9bf8-7eb28f452c2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:17 GMT]] Body:0xc0009c9b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173e8c0 TLS:<nil>}
I0510 17:09:17.981177  769136 retry.go:31] will retry after 335.861434ms: Temporary Error: unexpected response code: 503
I0510 17:09:18.320537  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ae59cfd-903a-45c7-bbb8-fa44298c9831] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:18 GMT]] Body:0xc0009c9bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a3040 TLS:<nil>}
I0510 17:09:18.320614  769136 retry.go:31] will retry after 655.710066ms: Temporary Error: unexpected response code: 503
I0510 17:09:18.979871  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bfd99077-fd2a-44bf-b3a3-7d16e8e86043] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:18 GMT]] Body:0xc0007c14c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a3180 TLS:<nil>}
I0510 17:09:18.979942  769136 retry.go:31] will retry after 441.684703ms: Temporary Error: unexpected response code: 503
I0510 17:09:19.425452  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c61e6eaa-1e99-47de-aa57-7c02d39d306e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:19 GMT]] Body:0xc001709700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001564500 TLS:<nil>}
I0510 17:09:19.425530  769136 retry.go:31] will retry after 1.058460912s: Temporary Error: unexpected response code: 503
I0510 17:09:20.487569  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a744e889-67a3-4697-bd5b-e1ab2e3c08a7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:20 GMT]] Body:0xc0007c15c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173ea00 TLS:<nil>}
I0510 17:09:20.487636  769136 retry.go:31] will retry after 1.607489209s: Temporary Error: unexpected response code: 503
I0510 17:09:22.099381  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3e03728e-ac3d-4764-886c-c37ffcfe4a9c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:22 GMT]] Body:0xc0009c9cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173eb40 TLS:<nil>}
I0510 17:09:22.099478  769136 retry.go:31] will retry after 1.329155478s: Temporary Error: unexpected response code: 503
I0510 17:09:23.432940  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ed6c52e-941f-4a24-be4b-9c9d9c7a1bf6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:23 GMT]] Body:0xc0007c1640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a32c0 TLS:<nil>}
I0510 17:09:23.433011  769136 retry.go:31] will retry after 2.551869989s: Temporary Error: unexpected response code: 503
I0510 17:09:25.988734  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e262f2bc-1592-4cbc-ba69-bda9f4066c05] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:25 GMT]] Body:0xc0009c9dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173ec80 TLS:<nil>}
I0510 17:09:25.988798  769136 retry.go:31] will retry after 6.213851592s: Temporary Error: unexpected response code: 503
I0510 17:09:32.206414  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8b1e0d9-5667-43e9-8e2a-bb0beba9c199] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:32 GMT]] Body:0xc0017098c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a3400 TLS:<nil>}
I0510 17:09:32.206490  769136 retry.go:31] will retry after 5.196793413s: Temporary Error: unexpected response code: 503
I0510 17:09:37.406794  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dff937bd-2846-4ec4-b276-5fff43609d41] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:37 GMT]] Body:0xc001709940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a3540 TLS:<nil>}
I0510 17:09:37.406857  769136 retry.go:31] will retry after 7.525880084s: Temporary Error: unexpected response code: 503
I0510 17:09:44.938686  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3bb2eaa4-5d02-4382-aaad-5b57fa42e462] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:44 GMT]] Body:0xc0017c4000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173edc0 TLS:<nil>}
I0510 17:09:44.938777  769136 retry.go:31] will retry after 9.642558561s: Temporary Error: unexpected response code: 503
I0510 17:09:54.586413  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a343b4c6-c4f5-4bd0-b3a4-e78b89575ad6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:09:54 GMT]] Body:0xc0017c4080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173ef00 TLS:<nil>}
I0510 17:09:54.586498  769136 retry.go:31] will retry after 23.499052707s: Temporary Error: unexpected response code: 503
I0510 17:10:18.089784  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb40634c-94df-49d1-bb9e-0140649a22a7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:10:18 GMT]] Body:0xc0007c1700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173f040 TLS:<nil>}
I0510 17:10:18.089857  769136 retry.go:31] will retry after 41.060387335s: Temporary Error: unexpected response code: 503
I0510 17:10:59.154329  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bb302584-b801-4d6a-9de3-b087dceb08ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:10:59 GMT]] Body:0xc0017c4180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007a3680 TLS:<nil>}
I0510 17:10:59.154469  769136 retry.go:31] will retry after 1m0.809169142s: Temporary Error: unexpected response code: 503
I0510 17:11:59.968992  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62beb15d-f22b-4406-9672-153f9092dca3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:11:59 GMT]] Body:0xc00057ca00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173f180 TLS:<nil>}
I0510 17:11:59.969098  769136 retry.go:31] will retry after 1m17.240427035s: Temporary Error: unexpected response code: 503
I0510 17:13:17.213176  769136 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[574692a7-fb0c-488f-912b-258a50effb46] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 10 May 2025 17:13:17 GMT]] Body:0xc00057ca00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00173f2c0 TLS:<nil>}
I0510 17:13:17.213266  769136 retry.go:31] will retry after 1m0.236025181s: Temporary Error: unexpected response code: 503
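
Every probe of the proxy URL above returned 503, which is what kubectl proxy relays while the kubernetes-dashboard Service has no ready endpoints. Given the docker.io rate limiting seen in the addon tests, the most plausible cause is that the docker.io/kubernetesui/dashboard:v2.7.0 pull never completed, though this log does not show the pod's events. A quick way to confirm from the same host while the proxy is still listening on 36195; the pod label below is taken from the Service selector logged above:

	# Was the dashboard pod ever Ready, and if not, why?
	kubectl --context functional-914764 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context functional-914764 -n kubernetes-dashboard \
	  describe pods -l k8s-app=kubernetes-dashboard
	# Reproduce the health probe the test loops on; prints the HTTP status code.
	curl -s -o /dev/null -w '%{http_code}\n' \
	  "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"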
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-914764
helpers_test.go:235: (dbg) docker inspect functional-914764:

-- stdout --
	[
	    {
	        "Id": "64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee",
	        "Created": "2025-05-10T17:06:49.422708893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:06:49.453322641Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/hosts",
	        "LogPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee-json.log",
	        "Name": "/functional-914764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-914764:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-914764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee",
	                "LowerDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-914764",
	                "Source": "/var/lib/docker/volumes/functional-914764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-914764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-914764",
	                "name.minikube.sigs.k8s.io": "functional-914764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e2d38dbb98458e30a706f6d63dc138fab1cec70f2a44b374b988cafd346778a",
	            "SandboxKey": "/var/run/docker/netns/7e2d38dbb984",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-914764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:fd:52:52:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0ace7f37eb64a81ed1332813173719935e5f7095abb26b25ddc6868822634c8",
	                    "EndpointID": "eaa24ac01cabdc03b95e86296753da9c998d67e4511d7d1bf8452d31f81aba08",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-914764",
	                        "64f37bce315f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
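
The part of the inspect output worth reading here is the port map: the kic container publishes each cluster port on loopback, so the apiserver (8441/tcp in this profile) is reachable from the host at 127.0.0.1:33152 and SSH at 127.0.0.1:33149. A sketch of checking both by hand, reusing the key path and ports from the logs above:

	# Apiserver health through the published loopback port (self-signed cert, hence -k).
	curl -sk https://127.0.0.1:33152/healthz
	# Shell into the node container roughly the way the harness's ssh_runner does.
	ssh -o StrictHostKeyChecking=no -p 33149 \
	  -i /home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa \
	  docker@127.0.0.1 sudo crictl ps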
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-914764 -n functional-914764
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 logs -n 25: (1.397550063s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                  Args                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service        | functional-914764 service              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | hello-node --url                       |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh findmnt          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | -T /mount2                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                     | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|                | -p functional-914764                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                 |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh findmnt          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | -T /mount3                             |                   |         |         |                     |                     |
	| mount          | -p functional-914764                   | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|                | --kill=true                            |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/729815.pem              |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /usr/share/ca-certificates/729815.pem  |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/51391683.0              |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/7298152.pem             |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /usr/share/ca-certificates/7298152.pem |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0              |                   |         |         |                     |                     |
	| addons         | functional-914764 addons list          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	| addons         | functional-914764 addons list          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | -o json                                |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/test/nested/copy/729815/hosts     |                   |         |         |                     |                     |
	| service        | functional-914764 service              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | hello-node-connect --url               |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format short                |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format json                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format table                |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format yaml                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh pgrep            | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC |                     |
	|                | buildkitd                              |                   |         |         |                     |                     |
	| image          | functional-914764 image build -t       | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | localhost/my-image:functional-914764   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr       |                   |         |         |                     |                     |
	| image          | functional-914764 image ls             | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	| update-context | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | update-context                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                 |                   |         |         |                     |                     |
	| update-context | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | update-context                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                 |                   |         |         |                     |                     |
	| update-context | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | update-context                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                 |                   |         |         |                     |                     |
	|----------------|----------------------------------------|-------------------|---------|---------|---------------------|---------------------|
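The Audit table records every minikube invocation against the profile; rows with an empty End Time (the dashboard entry and the mount --kill=true entry) are commands that had not completed when the log was captured. The hanging dashboard invocation can be re-run exactly as recorded:

	$ out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-914764 --alsologtostderr -v=1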
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:09:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:09:07.036160  765023 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:09:07.036269  765023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:07.036279  765023 out.go:358] Setting ErrFile to fd 2...
	I0510 17:09:07.036293  765023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:07.036609  765023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:09:07.037215  765023 out.go:352] Setting JSON to false
	I0510 17:09:07.038373  765023 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10294,"bootTime":1746886653,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:09:07.038452  765023 start.go:140] virtualization: kvm guest
	I0510 17:09:07.040824  765023 out.go:177] * [functional-914764] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:09:07.042266  765023 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:09:07.042259  765023 notify.go:220] Checking for updates...
	I0510 17:09:07.044180  765023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:09:07.045712  765023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:09:07.047257  765023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:09:07.048582  765023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:09:07.061178  765023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:09:06.994925  764930 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:06.995691  764930 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.020812  764930 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.020967  764930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.093709  764930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.082073403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.093867  764930 docker.go:318] overlay module found
	I0510 17:09:07.096931  764930 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:07.098502  764930 start.go:304] selected driver: docker
	I0510 17:09:07.098523  764930 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.098633  764930 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.101636  764930 out.go:201] 
	W0510 17:09:07.103269  764930 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 17:09:07.104409  764930 out.go:201] 
	I0510 17:09:07.064612  765023 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:07.065328  765023 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.094773  765023 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.094882  765023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.193650  765023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.175472465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.193762  765023 docker.go:318] overlay module found
	I0510 17:09:07.196248  765023 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:07.197931  765023 start.go:304] selected driver: docker
	I0510 17:09:07.197953  765023 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.198064  765023 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.198176  765023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.269866  765023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.252364205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.270735  765023 cni.go:84] Creating CNI manager for ""
	I0510 17:09:07.270821  765023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:09:07.270900  765023 start.go:347] cluster config:
	{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.273942  765023 out.go:177] * dry-run validation complete!
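Two processes are interleaved in this Last Start log: PID 765023 completes a dry-run validation of the existing docker/crio profile, while PID 764930 exits early with RSRC_INSUFFICIENT_REQ_MEMORY because its requested 250MiB is below minikube's 1800MB floor. A minimal sketch of a command that would reproduce the 764930 failure, assuming it was a dry-run start with a deliberately undersized --memory value (the exact invocation is not recorded in this excerpt):

	$ out/minikube-linux-amd64 start -p functional-914764 --dry-run --memory 250MB --alsologtostderr
	# expected to exit with RSRC_INSUFFICIENT_REQ_MEMORY (250MiB < 1800MB minimum)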
	
	
	==> CRI-O <==
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.313681937Z" level=info msg="Adding pod default_hello-node-connect-58f9cf68d8-qpwnx to CNI network \"kindnet\" (type=ptp)"
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.322594231Z" level=info msg="Got pod network &{Name:hello-node-connect-58f9cf68d8-qpwnx Namespace:default ID:5de56d5e621b65218d1286f7764152aba3a8a52c546fa18c2ec212ed69f4aa11 UID:a4043209-9f40-4d30-a859-082ebbb6ca57 NetNS:/var/run/netns/072817a7-ddd8-4a94-be40-73e28f48e93e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.322756211Z" level=info msg="Checking pod default_hello-node-connect-58f9cf68d8-qpwnx for CNI network kindnet (type=ptp)"
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.325860587Z" level=info msg="Ran pod sandbox 5de56d5e621b65218d1286f7764152aba3a8a52c546fa18c2ec212ed69f4aa11 with infra container: default/hello-node-connect-58f9cf68d8-qpwnx/POD" id=df92c349-84d4-45da-9af9-3191dea05b85 name=/runtime.v1.RuntimeService/RunPodSandbox
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.327034099Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.8" id=ed459976-63d6-451c-8d11-333c2f4817eb name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.327234477Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,RepoTags:[registry.k8s.io/echoserver:1.8],RepoDigests:[registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969],Size_:97846543,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ed459976-63d6-451c-8d11-333c2f4817eb name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.328007651Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.8" id=ec4c0a13-86bf-4449-b776-397374ebd259 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.328182103Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,RepoTags:[registry.k8s.io/echoserver:1.8],RepoDigests:[registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969],Size_:97846543,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ec4c0a13-86bf-4449-b776-397374ebd259 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.330856564Z" level=info msg="Creating container: default/hello-node-connect-58f9cf68d8-qpwnx/echoserver" id=48ca536d-1dc3-4f4c-9819-a9545eefabc6 name=/runtime.v1.RuntimeService/CreateContainer
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.330944341Z" level=warning msg="Allowed annotations are specified for workload []"
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.375174182Z" level=info msg="Created container 67bc1ef3aad71aadf2fb9e4ab35cc683d0c082b10bd38a998fe7f622f2f16307: default/hello-node-connect-58f9cf68d8-qpwnx/echoserver" id=48ca536d-1dc3-4f4c-9819-a9545eefabc6 name=/runtime.v1.RuntimeService/CreateContainer
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.375793559Z" level=info msg="Starting container: 67bc1ef3aad71aadf2fb9e4ab35cc683d0c082b10bd38a998fe7f622f2f16307" id=85dd9912-d6dd-4112-959e-c3ac64c0c2b5 name=/runtime.v1.RuntimeService/StartContainer
	May 10 17:12:16 functional-914764 crio[4926]: time="2025-05-10 17:12:16.381539270Z" level=info msg="Started container" PID=8518 containerID=67bc1ef3aad71aadf2fb9e4ab35cc683d0c082b10bd38a998fe7f622f2f16307 description=default/hello-node-connect-58f9cf68d8-qpwnx/echoserver id=85dd9912-d6dd-4112-959e-c3ac64c0c2b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5de56d5e621b65218d1286f7764152aba3a8a52c546fa18c2ec212ed69f4aa11
	May 10 17:13:02 functional-914764 crio[4926]: time="2025-05-10 17:13:02.316412523Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=b0f6095d-ee6a-46d1-aad4-ae4ee4df1655 name=/runtime.v1.ImageService/PullImage
	May 10 17:13:02 functional-914764 crio[4926]: time="2025-05-10 17:13:02.320148650Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	May 10 17:13:32 functional-914764 crio[4926]: time="2025-05-10 17:13:32.953738420Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=87303ff9-5b8b-4b95-aa86-c194d7304c25 name=/runtime.v1.ImageService/PullImage
	May 10 17:13:32 functional-914764 crio[4926]: time="2025-05-10 17:13:32.958158107Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 17:13:47 functional-914764 crio[4926]: time="2025-05-10 17:13:47.676231586Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a375b4f0-87d7-4f3c-9bfd-266f93242b99 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:13:47 functional-914764 crio[4926]: time="2025-05-10 17:13:47.676474574Z" level=info msg="Image docker.io/nginx:alpine not found" id=a375b4f0-87d7-4f3c-9bfd-266f93242b99 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:13:58 functional-914764 crio[4926]: time="2025-05-10 17:13:58.675953907Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=dfdfd536-d672-405d-9ad5-ca5c225f1387 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:13:58 functional-914764 crio[4926]: time="2025-05-10 17:13:58.676272674Z" level=info msg="Image docker.io/nginx:alpine not found" id=dfdfd536-d672-405d-9ad5-ca5c225f1387 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:14:03 functional-914764 crio[4926]: time="2025-05-10 17:14:03.569074978Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=cbcfdb16-3c3a-458a-b4e4-9e3db60650f0 name=/runtime.v1.ImageService/PullImage
	May 10 17:14:03 functional-914764 crio[4926]: time="2025-05-10 17:14:03.586507857Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 17:14:14 functional-914764 crio[4926]: time="2025-05-10 17:14:14.675554758Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=878a8628-7a48-40d4-8d7a-2354a0623222 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:14:14 functional-914764 crio[4926]: time="2025-05-10 17:14:14.675873818Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=878a8628-7a48-40d4-8d7a-2354a0623222 name=/runtime.v1.ImageService/ImageStatus
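At the time this log was captured, CRI-O was still pulling docker.io/nginx:alpine and the two dashboard images, and every image status check in between reports "Image ... not found"; this is consistent with the nginx pod sitting in ImagePullBackOff. A sketch for checking pull state from inside the node (crictl ships in the minikube node image; minikube ssh is one way to reach it):

	$ out/minikube-linux-amd64 -p functional-914764 ssh -- sudo crictl images
	$ out/minikube-linux-amd64 -p functional-914764 ssh -- sudo crictl pull docker.io/nginx:alpine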
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	67bc1ef3aad71       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                      2 minutes ago       Running             echoserver                0                   5de56d5e621b6       hello-node-connect-58f9cf68d8-qpwnx
	92a1bf27d6bc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   49250225d3fc2       busybox-mount
	275c4bd8a45f7       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    5 minutes ago       Running             echoserver                0                   8f4cdec4dfd2e       hello-node-fcfd88b6f-2w246
	adf6f74785797       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      5 minutes ago       Running             coredns                   2                   c43d8cda87453       coredns-674b8bbfcf-p47zm
	b68f5758d229e       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                      5 minutes ago       Running             kindnet-cni               2                   3f4bbb3dd1065       kindnet-zqd22
	b7b96114d0832       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                      5 minutes ago       Running             kube-proxy                2                   b6887f1bec22c       kube-proxy-ss4s9
	cdd649f2b2a77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       3                   ec13ac1e0e4da       storage-provisioner
	6471b9d7617bf       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4                                      5 minutes ago       Running             kube-apiserver            0                   66fd7bd7785a2       kube-apiserver-functional-914764
	636c2672ef1fe       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                      5 minutes ago       Running             kube-controller-manager   2                   de7832d34774b       kube-controller-manager-functional-914764
	b8aac10c549f9       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      5 minutes ago       Running             etcd                      2                   9c2c1b285f992       etcd-functional-914764
	141001f7a575a       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                      5 minutes ago       Running             kube-scheduler            2                   e667509803051       kube-scheduler-functional-914764
	b49024e3234d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       2                   ec13ac1e0e4da       storage-provisioner
	492f1e9244ec1       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                      6 minutes ago       Exited              kube-scheduler            1                   e667509803051       kube-scheduler-functional-914764
	36b4d81bf219c       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      6 minutes ago       Exited              etcd                      1                   9c2c1b285f992       etcd-functional-914764
	3bfba7be3e9f9       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                      6 minutes ago       Exited              kube-controller-manager   1                   de7832d34774b       kube-controller-manager-functional-914764
	02b7ba0b0ae58       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                      6 minutes ago       Exited              kindnet-cni               1                   3f4bbb3dd1065       kindnet-zqd22
	a8651e3149e64       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      6 minutes ago       Exited              coredns                   1                   c43d8cda87453       coredns-674b8bbfcf-p47zm
	6838fa75831e0       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                      6 minutes ago       Exited              kube-proxy                1                   b6887f1bec22c       kube-proxy-ss4s9
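Most control-plane containers appear twice: an Exited row from before the restart (e.g. ATTEMPT 1) and a Running row after it (ATTEMPT 2), sharing the same pod sandbox ID; kube-apiserver shows ATTEMPT 0 because its container was created fresh. The table can likely be regenerated on the node with something like:

	$ out/minikube-linux-amd64 -p functional-914764 ssh -- sudo crictl ps -a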
	
	
	==> coredns [a8651e3149e641e440e136f2d840345d8a000c042ee306b881bc8e87050dd071] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:60264 - 38594 "HINFO IN 4214401693363052933.6122617693655002349. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077168203s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
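This first CoreDNS instance came up while the API server was still restarting: its list calls fail first with connection refused, then with an RBAC error reporting the system:coredns clusterrole (among others) as not found, which clears once the control plane finishes rehydrating; the instance is later replaced and shuts down on SIGTERM. Once the cluster settles, the permission can be verified with impersonation, e.g.:

	$ kubectl --context functional-914764 auth can-i list namespaces --as=system:serviceaccount:kube-system:coredns
	# should print "yes" once the system:coredns clusterrole is back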
	
	
	==> coredns [adf6f74785797d4437b006b2d5407947dc2940b3526b84f3de5897b0796b5dca] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:49829 - 28248 "HINFO IN 4785160874892599986.9043715371217379896. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.468228785s
	
	
	==> describe nodes <==
	Name:               functional-914764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-914764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-914764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_07_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-914764
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:14:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:12:45 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:12:45 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:12:45 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:12:45 +0000   Sat, 10 May 2025 17:07:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-914764
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 4efca45a9db948e587520e24f4b8739c
	  System UUID:                c4750a4a-b2ad-455b-869c-3f20a6f4d060
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-qpwnx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  default                     hello-node-fcfd88b6f-2w246                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  default                     mysql-58ccfd96bb-8pr5j                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m59s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 coredns-674b8bbfcf-p47zm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m8s
	  kube-system                 etcd-functional-914764                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m13s
	  kube-system                 kindnet-zqd22                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m8s
	  kube-system                 kube-apiserver-functional-914764              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-functional-914764     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-ss4s9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-functional-914764              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-rdnm2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-46hh4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m6s                   kube-proxy       
	  Normal   Starting                 5m35s                  kube-proxy       
	  Normal   Starting                 6m8s                   kube-proxy       
	  Warning  CgroupV1                 7m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m13s                  kubelet          Node functional-914764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m13s                  kubelet          Node functional-914764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m13s                  kubelet          Node functional-914764 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m9s                   node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	  Normal   NodeReady                6m26s                  kubelet          Node functional-914764 status is now: NodeReady
	  Normal   RegisteredNode           6m7s                   node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	  Normal   NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node functional-914764 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node functional-914764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m41s (x8 over 5m41s)  kubelet          Node functional-914764 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m34s                  node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
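The Allocated resources percentages above are straightforward ratios against Allocatable: 1450m CPU requested out of 8000m is about 18%, the 800m CPU limit is 10%, and 732Mi of memory out of 32859344Ki (roughly 32089Mi) is about 2.3%, displayed as 2%. For example:

	$ echo "scale=3; 1450/8000" | bc
	.181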
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +1.002546] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +2.011769] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000002] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000003] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +4.063544] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000009] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003973] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +8.191083] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
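The repeated martian-source lines are the kernel flagging packets from pod IP 10.244.0.3 addressed to the service VIP 10.96.0.1 as they hit the bridge br-83fa5a3f9003; during the apiserver restart these are most likely noise rather than a failure, and they only appear because martian logging is switched on. That setting can be checked with:

	$ sysctl net.ipv4.conf.all.log_martians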
	
	
	==> etcd [36b4d81bf219cafe7496f77936067c3faf0dce6c9f63dbca8380d99503f20ce4] <==
	{"level":"info","ts":"2025-05-10T17:08:05.060111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:08:05.060137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-05-10T17:08:05.060156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.061492Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-914764 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:08:05.061526Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:05.061679Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:05.061717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:05.061511Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:05.062405Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:05.063096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:05.064203Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-05-10T17:08:05.064997Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:08:26.967923Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T17:08:26.968019Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-914764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"info","ts":"2025-05-10T17:08:27.106491Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106554Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106528Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106600Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106594Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T17:08:27.109582Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:27.109669Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:27.109680Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-914764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b8aac10c549f9fe207cb7749c9a80728801747d2b052a742422b2d6428f2c0bd] <==
	{"level":"info","ts":"2025-05-10T17:08:37.646642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-05-10T17:08:37.646738Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-05-10T17:08:37.646853Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:08:37.646900Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:08:37.649119Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T17:08:37.649524Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T17:08:37.649580Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T17:08:37.649705Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:37.649749Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:39.478686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.481254Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-914764 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:08:39.481257Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:39.481277Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:39.481513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:39.481607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:39.482064Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:39.482184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:39.482787Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:08:39.482799Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 17:14:17 up  2:56,  0 users,  load average: 0.12, 2.00, 30.11
	Linux functional-914764 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [02b7ba0b0ae587a392ce040f4ae1a585fbf13aeea5d8ef7ca3970bd961801962] <==
	I0510 17:08:03.146176       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0510 17:08:03.146400       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0510 17:08:03.146582       1 main.go:148] setting mtu 1500 for CNI 
	I0510 17:08:03.146603       1 main.go:178] kindnetd IP family: "ipv4"
	I0510 17:08:03.146616       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0510 17:08:03.544905       1 controller.go:361] Starting controller kube-network-policies
	I0510 17:08:03.544933       1 controller.go:365] Waiting for informer caches to sync
	I0510 17:08:03.544940       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0510 17:08:07.144823       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0510 17:08:07.145866       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0510 17:08:07.145989       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	E0510 17:08:07.145899       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I0510 17:08:08.545389       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0510 17:08:08.545416       1 metrics.go:61] Registering metrics
	I0510 17:08:08.545470       1 controller.go:401] Syncing nftables rules
	I0510 17:08:13.549407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:08:13.549466       1 main.go:301] handling current node
	I0510 17:08:23.547496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:08:23.547546       1 main.go:301] handling current node
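The reflector failures at 17:08:07 show kindnet listing pods and networkpolicies while the freshly restarted apiserver is still priming its RBAC cache (even system:basic-user is reported "not found"); one second later the informer caches sync, so this is startup ordering rather than a missing ClusterRole. Had it persisted, a quick check from outside the pod might look like this (SubjectAccessReview via kubectl; the clusterrole name kindnet is taken from the error text itself):

	# does the kindnet service account actually hold the permission it needs?
	kubectl auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
	kubectl get clusterrole kindnet -o name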
	
	
	==> kindnet [b68f5758d229ece0039fe62054ddeb7c47c90779f3224938cd509a5c38a85cd9] <==
	I0510 17:12:11.651587       1 main.go:301] handling current node
	I0510 17:12:21.647567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:12:21.647612       1 main.go:301] handling current node
	I0510 17:12:31.645709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:12:31.645746       1 main.go:301] handling current node
	I0510 17:12:41.645582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:12:41.645625       1 main.go:301] handling current node
	I0510 17:12:51.647499       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:12:51.647556       1 main.go:301] handling current node
	I0510 17:13:01.651552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:13:01.651593       1 main.go:301] handling current node
	I0510 17:13:11.651528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:13:11.651569       1 main.go:301] handling current node
	I0510 17:13:21.647505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:13:21.647550       1 main.go:301] handling current node
	I0510 17:13:31.651506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:13:31.651547       1 main.go:301] handling current node
	I0510 17:13:41.645244       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:13:41.645290       1 main.go:301] handling current node
	I0510 17:13:51.647565       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:13:51.647605       1 main.go:301] handling current node
	I0510 17:14:01.645723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:14:01.645760       1 main.go:301] handling current node
	I0510 17:14:11.651504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:14:11.651555       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6471b9d7617bf5433e4f0daaece1f4915cb330eb85fdf4e0cb3c343d71412587] <==
	I0510 17:08:41.464062       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 17:08:42.103972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 17:08:42.193157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 17:08:42.236018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 17:08:42.240611       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 17:08:44.216017       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 17:08:44.264564       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 17:08:44.315490       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 17:08:44.369275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:08:44.374535       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:00.980570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:00.983440       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.62.155"}
	I0510 17:09:04.386521       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:05.053521       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.211.229"}
	I0510 17:09:05.054668       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:15.239845       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.47.90"}
	I0510 17:09:15.240753       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:16.894017       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 17:09:17.080334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.6.195"}
	I0510 17:09:17.084105       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:17.159466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.76.37"}
	I0510 17:09:18.484531       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.210.62"}
	I0510 17:09:18.485190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:12:16.052702       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.205.43"}
	I0510 17:12:16.053648       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
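Each "allocated clusterIPs" line is paired with a cidrallocator.go update because this v1.33 apiserver allocates Service IPs through ServiceCIDR/IPAddress API objects rather than the legacy in-etcd bitmap. That makes the allocator state inspectable directly; a sketch, assuming the cluster exposes the networking.k8s.io ServiceCIDR and IPAddress resources as v1.33 does by default:

	# the 10.96.0.0/12 range from the log, and the IPs carved out of it
	kubectl --context functional-914764 get servicecidrs
	kubectl --context functional-914764 get ipaddresses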
	
	
	==> kube-controller-manager [3bfba7be3e9f996d96db7394046635e3253dfecd4da4ed607987d9ab5c5045c1] <==
	I0510 17:08:10.021052       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 17:08:10.037710       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0510 17:08:10.043296       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 17:08:10.044486       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:08:10.045673       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 17:08:10.066802       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 17:08:10.066848       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 17:08:10.069125       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:08:10.139043       1 shared_informer.go:357] "Caches are synced" controller="crt configmap"
	I0510 17:08:10.141288       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:08:10.165800       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:08:10.174775       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:08:10.174899       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:08:10.174984       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-914764"
	I0510 17:08:10.175032       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:08:10.222432       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 17:08:10.288219       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0510 17:08:10.316954       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 17:08:10.320641       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:10.324531       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:10.365651       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 17:08:10.732906       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:10.816644       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:10.816672       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:08:10.816682       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [636c2672ef1fea08eeafc7079d9aa4d4733f3c056ec93159d244a784a22e43da] <==
	I0510 17:08:43.876216       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 17:08:43.955562       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 17:08:43.962641       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 17:08:43.963797       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 17:08:43.963858       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 17:08:43.963972       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 17:08:44.050452       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:08:44.071645       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:08:44.095600       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 17:08:44.155234       1 shared_informer.go:357] "Caches are synced" controller="crt configmap"
	I0510 17:08:44.157536       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:08:44.167983       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:44.187985       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:44.212501       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 17:08:44.584724       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:44.593992       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:44.594020       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:08:44.594034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0510 17:09:16.949269       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.954705       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.959076       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.964078       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.964177       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.968771       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.972684       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
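The burst of replica_set.go errors at 17:09:16 is the controller manager trying to create dashboard pods in the window between the Deployments and the ServiceAccount being applied; the ReplicaSets retry and succeed once the account exists (kubernetes-dashboard-7779f9b69b-46hh4 shows up in the kubelet log below). Confirming the account landed is a one-liner, using the names from the errors:

	kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	kubectl -n kubernetes-dashboard get replicaset kubernetes-dashboard-7779f9b69b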
	
	
	==> kube-proxy [6838fa75831e0f1aa21700386c517d35bbaca85b8849dba5650f9e4d0cfa7a3b] <==
	I0510 17:08:03.048021       1 server_linux.go:63] "Using iptables proxy"
	E0510 17:08:07.068328       1 server.go:704] "Failed to retrieve node info" err="nodes \"functional-914764\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]"
	I0510 17:08:08.145545       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 17:08:08.145631       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:08:08.169501       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:08:08.169578       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:08:08.175554       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:08:08.176045       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:08:08.176078       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:08.177781       1 config.go:199] "Starting service config controller"
	I0510 17:08:08.177809       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:08:08.177825       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:08:08.177841       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:08:08.177847       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:08:08.177825       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:08:08.177888       1 config.go:329] "Starting node config controller"
	I0510 17:08:08.178550       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:08:08.278031       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:08:08.278045       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:08:08.278074       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:08:08.279352       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b7b96114d08323f0c0749fe7ebd64d1df4f6a61b2525f53d2a5de31fd7d263f1] <==
	I0510 17:08:41.170422       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:08:41.291189       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 17:08:41.291250       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:08:41.312036       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:08:41.312087       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:08:41.316467       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:08:41.316877       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:08:41.316900       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:41.318020       1 config.go:199] "Starting service config controller"
	I0510 17:08:41.318038       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:08:41.318058       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:08:41.318057       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:08:41.318109       1 config.go:329] "Starting node config controller"
	I0510 17:08:41.318128       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:08:41.318169       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:08:41.318225       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:08:41.419146       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:08:41.419344       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:08:41.419365       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:08:41.419405       1 shared_informer.go:357] "Caches are synced" controller="node config"
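Both kube-proxy instances log the same server.go:245 warning because nodePortAddresses is unset in the kubeadm-generated config, and the message itself names the remedy. In a kubeadm-bootstrapped cluster like this one that config lives in a ConfigMap, so one plausible fix looks like this (sketch only; "primary" is the special value the warning recommends):

	kubectl -n kube-system edit configmap kube-proxy
	# under config.conf, set:  nodePortAddresses: ["primary"]
	kubectl -n kube-system rollout restart daemonset kube-proxy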
	
	
	==> kube-scheduler [141001f7a575a10c1de330a29ea37d300d597392e874c6a7c33bd009a9651034] <==
	I0510 17:08:38.434175       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:08:40.484683       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:08:40.484841       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:08:40.484923       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:08:40.484964       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:08:40.560996       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:08:40.561024       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:40.563429       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:40.563494       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:40.564819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:08:40.565046       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:08:40.664548       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [492f1e9244ec13fb014809793c1c73610cc391bde33313e524517243557fcd3c] <==
	I0510 17:08:04.760269       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:08:07.065829       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:08:07.065931       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:08:07.066005       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:08:07.066051       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:08:07.252411       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:08:07.252458       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:07.256227       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:08:07.256348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:07.256956       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:07.256379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:08:07.357515       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:26.967154       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0510 17:08:26.967309       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0510 17:08:26.967454       1 run.go:72] "command failed" err="finished without leader elect"
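This first scheduler's "finished without leader elect" exit at 17:08:26 is the expected error path when the process is stopped while holding leadership during the control-plane restart (the etcd log above stops serving clients at 17:08:27); the replacement scheduler starting at 17:08:38 then takes over. The handoff is visible in the leader-election Lease:

	kubectl -n kube-system get lease kube-scheduler -o yaml | grep holderIdentity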
	
	
	==> kubelet <==
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.795011    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b: Error finding container ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b: Status 404 returned error can't find the container with id ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.795237    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Error finding container 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Status 404 returned error can't find the container with id 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.795467    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-de7832d34774b2eac76e97a8bdd51323a2d5328a16cb8f9474da47c36c123945: Error finding container de7832d34774b2eac76e97a8bdd51323a2d5328a16cb8f9474da47c36c123945: Status 404 returned error can't find the container with id de7832d34774b2eac76e97a8bdd51323a2d5328a16cb8f9474da47c36c123945
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.795687    5289 manager.go:1116] Failed to create existing container: /crio-c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Error finding container c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Status 404 returned error can't find the container with id c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.795854    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Error finding container e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Status 404 returned error can't find the container with id e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.796026    5289 manager.go:1116] Failed to create existing container: /crio-b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Error finding container b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Status 404 returned error can't find the container with id b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.796189    5289 manager.go:1116] Failed to create existing container: /crio-9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Error finding container 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Status 404 returned error can't find the container with id 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.796326    5289 manager.go:1116] Failed to create existing container: /crio-0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Error finding container 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Status 404 returned error can't find the container with id 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.796482    5289 manager.go:1116] Failed to create existing container: /crio-37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2: Error finding container 37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2: Status 404 returned error can't find the container with id 37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.846151    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897216845936704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:13:36 functional-914764 kubelet[5289]: E0510 17:13:36.846183    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897216845936704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:13:46 functional-914764 kubelet[5289]: E0510 17:13:46.848260    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897226848079904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:13:46 functional-914764 kubelet[5289]: E0510 17:13:46.848297    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897226848079904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:13:47 functional-914764 kubelet[5289]: E0510 17:13:47.676848    5289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="57253c8a-3401-46d1-bcfd-f34e9be17cbf"
	May 10 17:13:56 functional-914764 kubelet[5289]: E0510 17:13:56.850024    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897236849805489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:13:56 functional-914764 kubelet[5289]: E0510 17:13:56.850066    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897236849805489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:14:03 functional-914764 kubelet[5289]: E0510 17:14:03.568585    5289 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	May 10 17:14:03 functional-914764 kubelet[5289]: E0510 17:14:03.568668    5289 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	May 10 17:14:03 functional-914764 kubelet[5289]: E0510 17:14:03.568986    5289 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tj4gq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-7779f9b69b-46hh4_kubernetes-dashboard(ee332b30-fa84-4314-8387-354ccbfe05fa): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 17:14:03 functional-914764 kubelet[5289]: E0510 17:14:03.570193    5289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-46hh4" podUID="ee332b30-fa84-4314-8387-354ccbfe05fa"
	May 10 17:14:06 functional-914764 kubelet[5289]: E0510 17:14:06.851525    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897246851325783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:14:06 functional-914764 kubelet[5289]: E0510 17:14:06.851562    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897246851325783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:14:14 functional-914764 kubelet[5289]: E0510 17:14:14.676175    5289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-46hh4" podUID="ee332b30-fa84-4314-8387-354ccbfe05fa"
	May 10 17:14:16 functional-914764 kubelet[5289]: E0510 17:14:16.853449    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897256853202628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:14:16 functional-914764 kubelet[5289]: E0510 17:14:16.853505    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897256853202628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
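Every image failure the kubelet reports in this window is the same Docker Hub toomanyrequests error, so nginx-svc and the dashboard pod are blocked on an unauthenticated pull quota, not on anything cluster-side; the recurring eviction_manager "missing image stats" lines appear to be CRI-O image-filesystem stats noise repeating every 10s, unrelated to the failures. Two common workarounds, sketched with image names from this log (the registry credentials are placeholders):

	# pre-load the image from a host that still has pull quota
	minikube -p functional-914764 image load docker.io/nginx:alpine
	# or authenticate in-cluster pulls for the default service account
	kubectl create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'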
	
	
	==> storage-provisioner [b49024e3234d94289dd8c47d384de72c28b1c82feee53d59246f24a29071365a] <==
	I0510 17:08:16.413888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:08:16.421502       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:08:16.421544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0510 17:08:16.423609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:08:19.878977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:08:24.138806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cdd649f2b2a77111f55c977d96d98353d1c743b9d526a9f779c982ca9e23bed6] <==
	W0510 17:13:51.705128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:53.708529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:53.712642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:55.715511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:55.720304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:57.725036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:57.729279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:59.731864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:13:59.736950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:01.739582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:01.743780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:03.747289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:03.751142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:05.754060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:05.758143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:07.761294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:07.765376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:09.768640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:09.773944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:11.776759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:11.780324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:13.782945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:13.786725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:15.789548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:14:15.793344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
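The provisioner repeats this warning every ~2s because its client-go leader election still takes the kube-system/k8s.io-minikube-hostpath lock as a v1 Endpoints object, which Kubernetes 1.33 deprecates in favor of EndpointSlice (and, for locking, coordination.k8s.io Leases). The lock object named in the startup log is easy to inspect:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml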
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-914764 -n functional-914764
helpers_test.go:261: (dbg) Run:  kubectl --context functional-914764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4: exit status 1 (86.176344ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://92a1bf27d6bc4ba59b72bae94c43e1d4bd97d05ed9f91e5e84a11c3abda97a8e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 May 2025 17:09:09 +0000
	      Finished:     Sat, 10 May 2025 17:09:09 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-54pr4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-54pr4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m10s  default-scheduler  Successfully assigned default/busybox-mount to functional-914764
	  Normal  Pulling    5m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.116s (1.152s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-8pr5j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:18 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pp6ql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pp6ql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  4m59s              default-scheduler  Successfully assigned default/mysql-58ccfd96bb-8pr5j to functional-914764
	  Warning  Failed     2m17s              kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m17s              kubelet            Error: ErrImagePull
	  Normal   BackOff    2m17s              kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m17s              kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m5s (x2 over 5m)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:15 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hkc4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2hkc4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  5m2s                default-scheduler  Successfully assigned default/nginx-svc to functional-914764
	  Warning  Failed     46s (x2 over 4m4s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     46s (x2 over 4m4s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    31s (x2 over 4m3s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     31s (x2 over 4m3s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    20s (x3 over 5m3s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:13 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qdrjr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qdrjr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m4s                 default-scheduler  Successfully assigned default/sp-pod to functional-914764
	  Warning  Failed     4m34s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     76s (x2 over 4m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     76s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    61s (x2 over 4m34s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     61s (x2 over 4m34s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    47s (x3 over 5m5s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-rdnm2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-46hh4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4: exit status 1
I0510 17:14:22.124526  729815 retry.go:31] will retry after 36.019676322s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/DashboardCmd (302.27s)
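Every pod described above is failing for the same reason: Docker Hub rejects the unauthenticated pulls with toomanyrequests. A common mitigation, sketched below, is to make the pulls authenticated so the higher per-account limit applies; this assumes valid Docker Hub credentials in $DOCKERHUB_USER/$DOCKERHUB_TOKEN, and the secret name regcred is a placeholder, not something this test run uses:

# Create a registry credential, then attach it to the default service
# account so unmodified manifests (which declare no imagePullSecrets)
# pick it up automatically.
kubectl --context functional-914764 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKERHUB_USER" \
  --docker-password="$DOCKERHUB_TOKEN"
kubectl --context functional-914764 patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Pre-loading the images into the node with minikube image load, or configuring a registry mirror, would sidestep the limit entirely.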

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (187.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d2efe3d9-d778-4ad0-96e5-cd855ae099e7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004200966s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-914764 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-914764 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-914764 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-914764 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eeddb1c3-ed41-47e2-bda7-e307dfa62ee9] Pending
helpers_test.go:344: "sp-pod" [eeddb1c3-ed41-47e2-bda7-e307dfa62ee9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-914764 -n functional-914764
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-05-10 17:12:13.641119508 +0000 UTC m=+1073.322906634
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-914764 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-914764 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-914764/192.168.49.2
Start Time:       Sat, 10 May 2025 17:09:13 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qdrjr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-qdrjr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/sp-pod to functional-914764
  Warning  Failed     2m29s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m29s               kubelet            Error: ErrImagePull
  Normal   BackOff    2m29s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     2m29s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    2m16s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-914764 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-914764 logs sp-pod -n default: exit status 1 (66.286425ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-914764 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
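The test's own manifests live under testdata/storage-provisioner and are not reproduced in this report; the following is a minimal hand-rolled sketch of the same flow, with the storage size and access mode assumed rather than copied from the real pvc.yaml (the names, mount path, and label come from the describe output above):

# PVC + pod equivalent to the test's apply steps (sizes are assumptions).
kubectl --context functional-914764 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

# The same readiness gate the test applies (3m timeout):
kubectl --context functional-914764 wait --for=condition=Ready \
  pod -l test=storage-provisioner --timeout=3m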
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-914764
helpers_test.go:235: (dbg) docker inspect functional-914764:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee",
	        "Created": "2025-05-10T17:06:49.422708893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:06:49.453322641Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/hosts",
	        "LogPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee-json.log",
	        "Name": "/functional-914764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-914764:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-914764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee",
	                "LowerDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-914764",
	                "Source": "/var/lib/docker/volumes/functional-914764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-914764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-914764",
	                "name.minikube.sigs.k8s.io": "functional-914764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e2d38dbb98458e30a706f6d63dc138fab1cec70f2a44b374b988cafd346778a",
	            "SandboxKey": "/var/run/docker/netns/7e2d38dbb984",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-914764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:fd:52:52:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0ace7f37eb64a81ed1332813173719935e5f7095abb26b25ddc6868822634c8",
	                    "EndpointID": "eaa24ac01cabdc03b95e86296753da9c998d67e4511d7d1bf8452d31f81aba08",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-914764",
	                        "64f37bce315f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
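When only a few fields matter, the full docker inspect dump above can be narrowed with Go-template format strings; the field paths below are taken directly from the dump:

# Container state and cluster IP in one line:
docker inspect functional-914764 \
  --format 'status={{.State.Status}} ip={{(index .NetworkSettings.Networks "functional-914764").IPAddress}}'

# Host port mapped to the apiserver port 8441 (33152 in this run):
docker port functional-914764 8441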
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-914764 -n functional-914764
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 logs -n 25: (1.378632082s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                 Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service   | functional-914764 service list                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | -o json                                                               |                   |         |         |                     |                     |
	| tunnel    | functional-914764 tunnel                                              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| tunnel    | functional-914764 tunnel                                              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| service   | functional-914764 service                                             | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | --namespace=default --https                                           |                   |         |         |                     |                     |
	|           | --url hello-node                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-914764                                                  | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| mount     | -p functional-914764                                                  | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| mount     | -p functional-914764                                                  | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount1 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh findmnt                                         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | -T /mount1                                                            |                   |         |         |                     |                     |
	| tunnel    | functional-914764 tunnel                                              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| service   | functional-914764                                                     | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | service hello-node --url                                              |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                                      |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh findmnt                                         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | -T /mount1                                                            |                   |         |         |                     |                     |
	| service   | functional-914764 service                                             | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | hello-node --url                                                      |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh findmnt                                         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | -T /mount2                                                            |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                    | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | -p functional-914764                                                  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh findmnt                                         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | -T /mount3                                                            |                   |         |         |                     |                     |
	| mount     | -p functional-914764                                                  | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|           | --kill=true                                                           |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /etc/ssl/certs/729815.pem                                             |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /usr/share/ca-certificates/729815.pem                                 |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /etc/ssl/certs/51391683.0                                             |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /etc/ssl/certs/7298152.pem                                            |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /usr/share/ca-certificates/7298152.pem                                |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /etc/ssl/certs/3ec20f2e.0                                             |                   |         |         |                     |                     |
	| addons    | functional-914764 addons list                                         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	| addons    | functional-914764 addons list                                         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | -o json                                                               |                   |         |         |                     |                     |
	| ssh       | functional-914764 ssh sudo cat                                        | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|           | /etc/test/nested/copy/729815/hosts                                    |                   |         |         |                     |                     |
	|-----------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:09:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:09:07.036160  765023 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:09:07.036269  765023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:07.036279  765023 out.go:358] Setting ErrFile to fd 2...
	I0510 17:09:07.036293  765023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:07.036609  765023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:09:07.037215  765023 out.go:352] Setting JSON to false
	I0510 17:09:07.038373  765023 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10294,"bootTime":1746886653,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:09:07.038452  765023 start.go:140] virtualization: kvm guest
	I0510 17:09:07.040824  765023 out.go:177] * [functional-914764] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:09:07.042266  765023 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:09:07.042259  765023 notify.go:220] Checking for updates...
	I0510 17:09:07.044180  765023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:09:07.045712  765023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:09:07.047257  765023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:09:07.048582  765023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:09:07.061178  765023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:09:06.994925  764930 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:06.995691  764930 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.020812  764930 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.020967  764930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.093709  764930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.082073403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.093867  764930 docker.go:318] overlay module found
	I0510 17:09:07.096931  764930 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:07.098502  764930 start.go:304] selected driver: docker
	I0510 17:09:07.098523  764930 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.098633  764930 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.101636  764930 out.go:201] 
	W0510 17:09:07.103269  764930 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 17:09:07.104409  764930 out.go:201] 
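The warning above is minikube's RSRC_INSUFFICIENT_REQ_MEMORY guard: the concurrent minikube invocation logging as PID 764930 requested only 250MiB, below the 1800MB usable minimum, even though the profile itself carries Memory:4000. A sketch of an invocation that clears the check; the 4096 figure is illustrative:

# Request at least minikube's 1800MB memory floor for this profile.
minikube start -p functional-914764 --driver=docker --memory=4096mb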
	I0510 17:09:07.064612  765023 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:07.065328  765023 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.094773  765023 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.094882  765023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.193650  765023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.175472465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.193762  765023 docker.go:318] overlay module found
	I0510 17:09:07.196248  765023 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:07.197931  765023 start.go:304] selected driver: docker
	I0510 17:09:07.197953  765023 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.198064  765023 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.198176  765023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.269866  765023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.252364205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.270735  765023 cni.go:84] Creating CNI manager for ""
	I0510 17:09:07.270821  765023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:09:07.270900  765023 start.go:347] cluster config:
	{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.273942  765023 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	May 10 17:09:44 functional-914764 crio[4926]: time="2025-05-10 17:09:44.334634922Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	May 10 17:10:14 functional-914764 crio[4926]: time="2025-05-10 17:10:14.954141542Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=01d5d5d8-38ab-4dfe-a018-4630797a44d2 name=/runtime.v1.ImageService/PullImage
	May 10 17:10:14 functional-914764 crio[4926]: time="2025-05-10 17:10:14.958136001Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 17:10:15 functional-914764 crio[4926]: time="2025-05-10 17:10:15.002217074Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=9231bd4d-1925-432c-b0d0-5499def99263 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:10:15 functional-914764 crio[4926]: time="2025-05-10 17:10:15.002461716Z" level=info msg="Image docker.io/nginx:alpine not found" id=9231bd4d-1925-432c-b0d0-5499def99263 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:10:29 functional-914764 crio[4926]: time="2025-05-10 17:10:29.675714657Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=33aaf545-b1de-4884-bac4-0c6beae2e3c6 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:10:29 functional-914764 crio[4926]: time="2025-05-10 17:10:29.676094132Z" level=info msg="Image docker.io/nginx:alpine not found" id=33aaf545-b1de-4884-bac4-0c6beae2e3c6 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:10:59 functional-914764 crio[4926]: time="2025-05-10 17:10:59.961490096Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=b83379c7-5fe6-4c98-8ea0-0ad4d3a1b620 name=/runtime.v1.ImageService/PullImage
	May 10 17:10:59 functional-914764 crio[4926]: time="2025-05-10 17:10:59.962848798Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 17:11:00 functional-914764 crio[4926]: time="2025-05-10 17:11:00.099651318Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b5138952-af5e-43c7-b77f-900dbc905649 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:00 functional-914764 crio[4926]: time="2025-05-10 17:11:00.099944480Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b5138952-af5e-43c7-b77f-900dbc905649 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:14 functional-914764 crio[4926]: time="2025-05-10 17:11:14.677298053Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ae1839d5-b3e1-4696-984c-454d0194f102 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:14 functional-914764 crio[4926]: time="2025-05-10 17:11:14.677658343Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ae1839d5-b3e1-4696-984c-454d0194f102 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:30 functional-914764 crio[4926]: time="2025-05-10 17:11:30.584086824Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=6bf517e5-60f9-4a43-b273-8f2c2ef1e4c3 name=/runtime.v1.ImageService/PullImage
	May 10 17:11:30 functional-914764 crio[4926]: time="2025-05-10 17:11:30.600597600Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	May 10 17:11:31 functional-914764 crio[4926]: time="2025-05-10 17:11:31.162705365Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=346c6d23-276d-4709-9fe7-f20c85666a4c name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:31 functional-914764 crio[4926]: time="2025-05-10 17:11:31.163055608Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=346c6d23-276d-4709-9fe7-f20c85666a4c name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:42 functional-914764 crio[4926]: time="2025-05-10 17:11:42.676148653Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=47d46b7a-fb42-4c84-82e2-1dcddee47ac4 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:11:42 functional-914764 crio[4926]: time="2025-05-10 17:11:42.676542906Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=47d46b7a-fb42-4c84-82e2-1dcddee47ac4 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:01 functional-914764 crio[4926]: time="2025-05-10 17:12:01.207993219Z" level=info msg="Pulling image: docker.io/nginx:latest" id=79ffb709-0f4c-4dba-aa04-9d93745212b0 name=/runtime.v1.ImageService/PullImage
	May 10 17:12:01 functional-914764 crio[4926]: time="2025-05-10 17:12:01.209144611Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	May 10 17:12:01 functional-914764 crio[4926]: time="2025-05-10 17:12:01.224784241Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=cb35a55e-a20b-4744-abbf-bfa876da3055 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:01 functional-914764 crio[4926]: time="2025-05-10 17:12:01.224978841Z" level=info msg="Image docker.io/mysql:5.7 not found" id=cb35a55e-a20b-4744-abbf-bfa876da3055 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:13 functional-914764 crio[4926]: time="2025-05-10 17:12:13.675961008Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e2ec88ec-7249-4c44-b15b-536462c1b8ea name=/runtime.v1.ImageService/ImageStatus
	May 10 17:12:13 functional-914764 crio[4926]: time="2025-05-10 17:12:13.676280052Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e2ec88ec-7249-4c44-b15b-536462c1b8ea name=/runtime.v1.ImageService/ImageStatus
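The repeated ImageStatus/PullImage RPCs above can be exercised by hand from inside the node with crictl, which ships on minikube nodes; a sketch:

# List images the CRI-O runtime actually has, then retry one pull
# manually to observe the registry error first-hand.
minikube -p functional-914764 ssh -- sudo crictl images
minikube -p functional-914764 ssh -- sudo crictl pull docker.io/nginx:alpine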
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92a1bf27d6bc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger              0                   49250225d3fc2       busybox-mount
	275c4bd8a45f7       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    3 minutes ago       Running             echoserver                0                   8f4cdec4dfd2e       hello-node-fcfd88b6f-2w246
	adf6f74785797       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      3 minutes ago       Running             coredns                   2                   c43d8cda87453       coredns-674b8bbfcf-p47zm
	b68f5758d229e       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                      3 minutes ago       Running             kindnet-cni               2                   3f4bbb3dd1065       kindnet-zqd22
	b7b96114d0832       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                      3 minutes ago       Running             kube-proxy                2                   b6887f1bec22c       kube-proxy-ss4s9
	cdd649f2b2a77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       3                   ec13ac1e0e4da       storage-provisioner
	6471b9d7617bf       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4                                      3 minutes ago       Running             kube-apiserver            0                   66fd7bd7785a2       kube-apiserver-functional-914764
	636c2672ef1fe       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                      3 minutes ago       Running             kube-controller-manager   2                   de7832d34774b       kube-controller-manager-functional-914764
	b8aac10c549f9       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      3 minutes ago       Running             etcd                      2                   9c2c1b285f992       etcd-functional-914764
	141001f7a575a       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                      3 minutes ago       Running             kube-scheduler            2                   e667509803051       kube-scheduler-functional-914764
	b49024e3234d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       2                   ec13ac1e0e4da       storage-provisioner
	492f1e9244ec1       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                      4 minutes ago       Exited              kube-scheduler            1                   e667509803051       kube-scheduler-functional-914764
	36b4d81bf219c       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      4 minutes ago       Exited              etcd                      1                   9c2c1b285f992       etcd-functional-914764
	3bfba7be3e9f9       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                      4 minutes ago       Exited              kube-controller-manager   1                   de7832d34774b       kube-controller-manager-functional-914764
	02b7ba0b0ae58       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                      4 minutes ago       Exited              kindnet-cni               1                   3f4bbb3dd1065       kindnet-zqd22
	a8651e3149e64       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      4 minutes ago       Exited              coredns                   1                   c43d8cda87453       coredns-674b8bbfcf-p47zm
	6838fa75831e0       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                      4 minutes ago       Exited              kube-proxy                1                   b6887f1bec22c       kube-proxy-ss4s9
	
	
	==> coredns [a8651e3149e641e440e136f2d840345d8a000c042ee306b881bc8e87050dd071] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:60264 - 38594 "HINFO IN 4214401693363052933.6122617693655002349. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077168203s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [adf6f74785797d4437b006b2d5407947dc2940b3526b84f3de5897b0796b5dca] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:49829 - 28248 "HINFO IN 4785160874892599986.9043715371217379896. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.468228785s
	
	
	==> describe nodes <==
	Name:               functional-914764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-914764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-914764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_07_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-914764
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:12:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:09:41 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:09:41 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:09:41 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:09:41 +0000   Sat, 10 May 2025 17:07:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-914764
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 4efca45a9db948e587520e24f4b8739c
	  System UUID:                c4750a4a-b2ad-455b-869c-3f20a6f4d060
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-fcfd88b6f-2w246                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     mysql-58ccfd96bb-8pr5j                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     2m56s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-674b8bbfcf-p47zm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m5s
	  kube-system                 etcd-functional-914764                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m10s
	  kube-system                 kindnet-zqd22                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-functional-914764              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-914764     200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-ss4s9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-functional-914764              100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-rdnm2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-46hh4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 3m33s                  kube-proxy       
	  Normal   Starting                 4m6s                   kube-proxy       
	  Warning  CgroupV1                 5m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m10s                  kubelet          Node functional-914764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s                  kubelet          Node functional-914764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s                  kubelet          Node functional-914764 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m10s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m6s                   node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	  Normal   NodeReady                4m23s                  kubelet          Node functional-914764 status is now: NodeReady
	  Normal   RegisteredNode           4m4s                   node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-914764 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-914764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x8 over 3m38s)  kubelet          Node functional-914764 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m31s                  node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +1.002546] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +2.011769] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000002] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000003] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +4.063544] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000009] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003973] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +8.191083] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	
	
	==> etcd [36b4d81bf219cafe7496f77936067c3faf0dce6c9f63dbca8380d99503f20ce4] <==
	{"level":"info","ts":"2025-05-10T17:08:05.060111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:08:05.060137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-05-10T17:08:05.060156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.061492Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-914764 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:08:05.061526Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:05.061679Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:05.061717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:05.061511Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:05.062405Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:05.063096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:05.064203Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-05-10T17:08:05.064997Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:08:26.967923Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T17:08:26.968019Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-914764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"info","ts":"2025-05-10T17:08:27.106491Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106554Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106528Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106600Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106594Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T17:08:27.109582Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:27.109669Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:27.109680Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-914764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b8aac10c549f9fe207cb7749c9a80728801747d2b052a742422b2d6428f2c0bd] <==
	{"level":"info","ts":"2025-05-10T17:08:37.646642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-05-10T17:08:37.646738Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-05-10T17:08:37.646853Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:08:37.646900Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:08:37.649119Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T17:08:37.649524Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T17:08:37.649580Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T17:08:37.649705Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:37.649749Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:39.478686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.481254Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-914764 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:08:39.481257Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:39.481277Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:39.481513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:39.481607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:39.482064Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:39.482184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:39.482787Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:08:39.482799Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 17:12:14 up  2:54,  0 users,  load average: 0.12, 2.92, 34.25
	Linux functional-914764 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [02b7ba0b0ae587a392ce040f4ae1a585fbf13aeea5d8ef7ca3970bd961801962] <==
	I0510 17:08:03.146176       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0510 17:08:03.146400       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0510 17:08:03.146582       1 main.go:148] setting mtu 1500 for CNI 
	I0510 17:08:03.146603       1 main.go:178] kindnetd IP family: "ipv4"
	I0510 17:08:03.146616       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0510 17:08:03.544905       1 controller.go:361] Starting controller kube-network-policies
	I0510 17:08:03.544933       1 controller.go:365] Waiting for informer caches to sync
	I0510 17:08:03.544940       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0510 17:08:07.144823       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0510 17:08:07.145866       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0510 17:08:07.145989       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	E0510 17:08:07.145899       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I0510 17:08:08.545389       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0510 17:08:08.545416       1 metrics.go:61] Registering metrics
	I0510 17:08:08.545470       1 controller.go:401] Syncing nftables rules
	I0510 17:08:13.549407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:08:13.549466       1 main.go:301] handling current node
	I0510 17:08:23.547496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:08:23.547546       1 main.go:301] handling current node
	
	
	==> kindnet [b68f5758d229ece0039fe62054ddeb7c47c90779f3224938cd509a5c38a85cd9] <==
	I0510 17:10:11.651603       1 main.go:301] handling current node
	I0510 17:10:21.650547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:10:21.650598       1 main.go:301] handling current node
	I0510 17:10:31.646063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:10:31.646133       1 main.go:301] handling current node
	I0510 17:10:41.645882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:10:41.645917       1 main.go:301] handling current node
	I0510 17:10:51.648544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:10:51.648602       1 main.go:301] handling current node
	I0510 17:11:01.645603       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:11:01.645640       1 main.go:301] handling current node
	I0510 17:11:11.651583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:11:11.651631       1 main.go:301] handling current node
	I0510 17:11:21.647517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:11:21.647566       1 main.go:301] handling current node
	I0510 17:11:31.645074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:11:31.645129       1 main.go:301] handling current node
	I0510 17:11:41.645183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:11:41.645232       1 main.go:301] handling current node
	I0510 17:11:51.647516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:11:51.647565       1 main.go:301] handling current node
	I0510 17:12:01.645352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:12:01.645389       1 main.go:301] handling current node
	I0510 17:12:11.651549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:12:11.651587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6471b9d7617bf5433e4f0daaece1f4915cb330eb85fdf4e0cb3c343d71412587] <==
	I0510 17:08:40.645272       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0510 17:08:40.747708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0510 17:08:41.464062       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0510 17:08:42.103972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 17:08:42.193157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 17:08:42.236018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 17:08:42.240611       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 17:08:44.216017       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 17:08:44.264564       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 17:08:44.315490       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 17:08:44.369275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:08:44.374535       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:00.980570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:00.983440       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.62.155"}
	I0510 17:09:04.386521       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:05.053521       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.211.229"}
	I0510 17:09:05.054668       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:15.239845       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.47.90"}
	I0510 17:09:15.240753       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:16.894017       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 17:09:17.080334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.6.195"}
	I0510 17:09:17.084105       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:17.159466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.76.37"}
	I0510 17:09:18.484531       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.210.62"}
	I0510 17:09:18.485190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3bfba7be3e9f996d96db7394046635e3253dfecd4da4ed607987d9ab5c5045c1] <==
	I0510 17:08:10.021052       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 17:08:10.037710       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0510 17:08:10.043296       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 17:08:10.044486       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:08:10.045673       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 17:08:10.066802       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 17:08:10.066848       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 17:08:10.069125       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:08:10.139043       1 shared_informer.go:357] "Caches are synced" controller="crt configmap"
	I0510 17:08:10.141288       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:08:10.165800       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:08:10.174775       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:08:10.174899       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:08:10.174984       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-914764"
	I0510 17:08:10.175032       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:08:10.222432       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 17:08:10.288219       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0510 17:08:10.316954       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 17:08:10.320641       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:10.324531       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:10.365651       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 17:08:10.732906       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:10.816644       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:10.816672       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:08:10.816682       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [636c2672ef1fea08eeafc7079d9aa4d4733f3c056ec93159d244a784a22e43da] <==
	I0510 17:08:43.876216       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 17:08:43.955562       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 17:08:43.962641       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 17:08:43.963797       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 17:08:43.963858       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 17:08:43.963972       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 17:08:44.050452       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:08:44.071645       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:08:44.095600       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 17:08:44.155234       1 shared_informer.go:357] "Caches are synced" controller="crt configmap"
	I0510 17:08:44.157536       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:08:44.167983       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:44.187985       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:44.212501       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 17:08:44.584724       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:44.593992       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:44.594020       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:08:44.594034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0510 17:09:16.949269       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.954705       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.959076       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.964078       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.964177       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.968771       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.972684       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [6838fa75831e0f1aa21700386c517d35bbaca85b8849dba5650f9e4d0cfa7a3b] <==
	I0510 17:08:03.048021       1 server_linux.go:63] "Using iptables proxy"
	E0510 17:08:07.068328       1 server.go:704] "Failed to retrieve node info" err="nodes \"functional-914764\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]"
	I0510 17:08:08.145545       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 17:08:08.145631       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:08:08.169501       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:08:08.169578       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:08:08.175554       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:08:08.176045       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:08:08.176078       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:08.177781       1 config.go:199] "Starting service config controller"
	I0510 17:08:08.177809       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:08:08.177825       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:08:08.177841       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:08:08.177847       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:08:08.177825       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:08:08.177888       1 config.go:329] "Starting node config controller"
	I0510 17:08:08.178550       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:08:08.278031       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:08:08.278045       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:08:08.278074       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:08:08.279352       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b7b96114d08323f0c0749fe7ebd64d1df4f6a61b2525f53d2a5de31fd7d263f1] <==
	I0510 17:08:41.170422       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:08:41.291189       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 17:08:41.291250       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:08:41.312036       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:08:41.312087       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:08:41.316467       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:08:41.316877       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:08:41.316900       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:41.318020       1 config.go:199] "Starting service config controller"
	I0510 17:08:41.318038       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:08:41.318058       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:08:41.318057       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:08:41.318109       1 config.go:329] "Starting node config controller"
	I0510 17:08:41.318128       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:08:41.318169       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:08:41.318225       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:08:41.419146       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:08:41.419344       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:08:41.419365       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:08:41.419405       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [141001f7a575a10c1de330a29ea37d300d597392e874c6a7c33bd009a9651034] <==
	I0510 17:08:38.434175       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:08:40.484683       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:08:40.484841       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:08:40.484923       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:08:40.484964       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:08:40.560996       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:08:40.561024       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:40.563429       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:40.563494       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:40.564819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:08:40.565046       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:08:40.664548       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [492f1e9244ec13fb014809793c1c73610cc391bde33313e524517243557fcd3c] <==
	I0510 17:08:04.760269       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:08:07.065829       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:08:07.065931       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:08:07.066005       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:08:07.066051       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:08:07.252411       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:08:07.252458       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:07.256227       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:08:07.256348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:07.256956       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:07.256379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:08:07.357515       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:26.967154       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0510 17:08:26.967309       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0510 17:08:26.967454       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.793585    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Error finding container 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Status 404 returned error can't find the container with id 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.793882    5289 manager.go:1116] Failed to create existing container: /crio-3f4bbb3dd1065712bea54b3ee29d89db4ccec0bdf4fa0c893a037b9c188525db: Error finding container 3f4bbb3dd1065712bea54b3ee29d89db4ccec0bdf4fa0c893a037b9c188525db: Status 404 returned error can't find the container with id 3f4bbb3dd1065712bea54b3ee29d89db4ccec0bdf4fa0c893a037b9c188525db
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.794087    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Error finding container c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Status 404 returned error can't find the container with id c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.794250    5289 manager.go:1116] Failed to create existing container: /crio-c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Error finding container c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Status 404 returned error can't find the container with id c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.794389    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Error finding container 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Status 404 returned error can't find the container with id 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.794556    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-de7832d34774b2eac76e97a8bdd51323a2d5328a16cb8f9474da47c36c123945: Error finding container de7832d34774b2eac76e97a8bdd51323a2d5328a16cb8f9474da47c36c123945: Status 404 returned error can't find the container with id de7832d34774b2eac76e97a8bdd51323a2d5328a16cb8f9474da47c36c123945
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.794699    5289 manager.go:1116] Failed to create existing container: /crio-e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Error finding container e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Status 404 returned error can't find the container with id e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.794917    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2: Error finding container 37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2: Status 404 returned error can't find the container with id 37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.795153    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Error finding container e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Status 404 returned error can't find the container with id e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.795313    5289 manager.go:1116] Failed to create existing container: /crio-9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Error finding container 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Status 404 returned error can't find the container with id 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.795492    5289 manager.go:1116] Failed to create existing container: /crio-b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Error finding container b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Status 404 returned error can't find the container with id b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.795712    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b: Error finding container ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b: Status 404 returned error can't find the container with id ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.824901    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897096824688044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:11:36 functional-914764 kubelet[5289]: E0510 17:11:36.824932    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897096824688044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:11:46 functional-914764 kubelet[5289]: E0510 17:11:46.826326    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897106826091140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:11:46 functional-914764 kubelet[5289]: E0510 17:11:46.826369    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897106826091140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:11:56 functional-914764 kubelet[5289]: E0510 17:11:56.827877    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897116827647554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:11:56 functional-914764 kubelet[5289]: E0510 17:11:56.827923    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897116827647554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:12:01 functional-914764 kubelet[5289]: E0510 17:12:01.207561    5289 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	May 10 17:12:01 functional-914764 kubelet[5289]: E0510 17:12:01.207637    5289 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	May 10 17:12:01 functional-914764 kubelet[5289]: E0510 17:12:01.207966    5289 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp6ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-8pr5j_default(c771ae48-77ae-4678-971a-fb02d978975e): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 17:12:01 functional-914764 kubelet[5289]: E0510 17:12:01.209174    5289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-8pr5j" podUID="c771ae48-77ae-4678-971a-fb02d978975e"
	May 10 17:12:01 functional-914764 kubelet[5289]: E0510 17:12:01.225288    5289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-8pr5j" podUID="c771ae48-77ae-4678-971a-fb02d978975e"
	May 10 17:12:06 functional-914764 kubelet[5289]: E0510 17:12:06.829583    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897126829411158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:12:06 functional-914764 kubelet[5289]: E0510 17:12:06.829618    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897126829411158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:187522,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b49024e3234d94289dd8c47d384de72c28b1c82feee53d59246f24a29071365a] <==
	I0510 17:08:16.413888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:08:16.421502       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:08:16.421544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0510 17:08:16.423609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:08:19.878977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:08:24.138806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cdd649f2b2a77111f55c977d96d98353d1c743b9d526a9f779c982ca9e23bed6] <==
	W0510 17:11:51.244265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:53.247884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:53.252954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:55.255556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:55.259516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:57.262521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:57.267660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:59.270661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:11:59.275997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:01.278404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:01.283596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:03.287746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:03.291791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:05.295311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:05.300459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:07.303948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:07.308480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:09.311927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:09.317063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:11.320101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:11.324589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:13.327447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:13.331333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:15.334657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:12:15.338643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
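Note on the log above: the storage-provisioner warnings fire on each leader-election renew. The provisioner's lock is the v1 Endpoints object kube-system/k8s.io-minikube-hostpath (acquired in the first instance's log), and client-go surfaces the v1.33+ Endpoints deprecation warning on every write to it, so these lines are noise rather than a failure cause. A hedged way to inspect the lock object, assuming the functional-914764 cluster is still reachable:

	kubectl --context functional-914764 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml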
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-914764 -n functional-914764
helpers_test.go:261: (dbg) Run:  kubectl --context functional-914764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4: exit status 1 (83.063561ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://92a1bf27d6bc4ba59b72bae94c43e1d4bd97d05ed9f91e5e84a11c3abda97a8e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 May 2025 17:09:09 +0000
	      Finished:     Sat, 10 May 2025 17:09:09 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-54pr4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-54pr4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m8s  default-scheduler  Successfully assigned default/busybox-mount to functional-914764
	  Normal  Pulling    3m7s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m6s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.116s (1.152s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m6s  kubelet            Created container: mount-munger
	  Normal  Started    3m6s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-8pr5j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:18 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pp6ql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pp6ql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m57s               default-scheduler  Successfully assigned default/mysql-58ccfd96bb-8pr5j to functional-914764
	  Warning  Failed     14s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     14s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    14s                 kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     14s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2s (x2 over 2m57s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:15 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hkc4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2hkc4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  3m                 default-scheduler  Successfully assigned default/nginx-svc to functional-914764
	  Warning  Failed     2m1s               kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m1s               kubelet            Error: ErrImagePull
	  Normal   BackOff    2m                 kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    106s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:13 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qdrjr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qdrjr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-914764
	  Warning  Failed     2m31s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m31s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    2m31s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m31s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m18s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-rdnm2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-46hh4" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (187.98s)
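The storage path itself appears healthy (sp-pod scheduled and reached PodReadyToStartContainers, so the claim bound and mounted); the test timed out because docker.io/nginx never pulled past Docker Hub's unauthenticated rate limit, per the sp-pod events above. A minimal mitigation sketch, assuming the CI host can still obtain the image (from a mirror or a warm local cache), is to side-load it so the kubelet never pulls from Docker Hub:

	# sketch only: profile name taken from this report; the host must already be able to fetch the image
	docker pull docker.io/nginx
	minikube -p functional-914764 image load docker.io/nginx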

TestFunctional/parallel/MySQL (602.74s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-914764 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-8pr5j" [c771ae48-77ae-4678-971a-fb02d978975e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0510 17:09:47.130054  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:12:03.265346  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-914764 -n functional-914764
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-05-10 17:19:18.815564254 +0000 UTC m=+1498.497351408
functional_test.go:1816: (dbg) Run:  kubectl --context functional-914764 describe po mysql-58ccfd96bb-8pr5j -n default
functional_test.go:1816: (dbg) kubectl --context functional-914764 describe po mysql-58ccfd96bb-8pr5j -n default:
Name:             mysql-58ccfd96bb-8pr5j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-914764/192.168.49.2
Start Time:       Sat, 10 May 2025 17:09:18 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pp6ql (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-pp6ql:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-8pr5j to functional-914764
Warning  Failed     3m43s (x2 over 7m17s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    3m30s (x2 over 7m17s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     3m30s (x2 over 7m17s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    3m19s (x3 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     9s (x3 over 7m17s)     kubelet            Error: ErrImagePull
Warning  Failed     9s                     kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
functional_test.go:1816: (dbg) Run:  kubectl --context functional-914764 logs mysql-58ccfd96bb-8pr5j -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-914764 logs mysql-58ccfd96bb-8pr5j -n default: exit status 1 (67.948339ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-8pr5j" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1816: kubectl --context functional-914764 logs mysql-58ccfd96bb-8pr5j -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
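The describe output shows every pull attempt, including the by-digest retry, rejected with toomanyrequests, so this is the same Docker Hub rate limit as the nginx failures rather than a MySQL or scheduling problem. A hedged alternative to side-loading is authenticating the pulls; the secret name and credentials below are placeholders:

	kubectl --context functional-914764 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<docker-hub-user> --docker-password=<access-token>
	kubectl --context functional-914764 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Authenticated clients get a higher pull quota, and attaching the secret to the default ServiceAccount covers pods, like this mysql Deployment's, that do not declare imagePullSecrets themselves.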
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-914764
helpers_test.go:235: (dbg) docker inspect functional-914764:

-- stdout --
	[
	    {
	        "Id": "64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee",
	        "Created": "2025-05-10T17:06:49.422708893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 754053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:06:49.453322641Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/hosts",
	        "LogPath": "/var/lib/docker/containers/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee-json.log",
	        "Name": "/functional-914764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-914764:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-914764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee",
	                "LowerDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a65c263392192e974528fcb4ab1d1977dd7dbb93115efe2211b9afab4e57d5bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-914764",
	                "Source": "/var/lib/docker/volumes/functional-914764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-914764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-914764",
	                "name.minikube.sigs.k8s.io": "functional-914764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e2d38dbb98458e30a706f6d63dc138fab1cec70f2a44b374b988cafd346778a",
	            "SandboxKey": "/var/run/docker/netns/7e2d38dbb984",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-914764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:fd:52:52:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0ace7f37eb64a81ed1332813173719935e5f7095abb26b25ddc6868822634c8",
	                    "EndpointID": "eaa24ac01cabdc03b95e86296753da9c998d67e4511d7d1bf8452d31f81aba08",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-914764",
	                        "64f37bce315f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-914764 -n functional-914764
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 logs -n 25: (1.380237652s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                  Args                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service        | functional-914764 service              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | hello-node --url                       |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh findmnt          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | -T /mount2                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                     | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|                | -p functional-914764                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                 |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh findmnt          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | -T /mount3                             |                   |         |         |                     |                     |
	| mount          | -p functional-914764                   | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC |                     |
	|                | --kill=true                            |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/729815.pem              |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /usr/share/ca-certificates/729815.pem  |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/51391683.0              |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/7298152.pem             |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /usr/share/ca-certificates/7298152.pem |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0              |                   |         |         |                     |                     |
	| addons         | functional-914764 addons list          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	| addons         | functional-914764 addons list          | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | -o json                                |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh sudo cat         | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:09 UTC | 10 May 25 17:09 UTC |
	|                | /etc/test/nested/copy/729815/hosts     |                   |         |         |                     |                     |
	| service        | functional-914764 service              | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | hello-node-connect --url               |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format short                |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format json                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format table                |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| image          | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | image ls --format yaml                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                      |                   |         |         |                     |                     |
	| ssh            | functional-914764 ssh pgrep            | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC |                     |
	|                | buildkitd                              |                   |         |         |                     |                     |
	| image          | functional-914764 image build -t       | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | localhost/my-image:functional-914764   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr       |                   |         |         |                     |                     |
	| image          | functional-914764 image ls             | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	| update-context | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | update-context                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                 |                   |         |         |                     |                     |
	| update-context | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | update-context                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                 |                   |         |         |                     |                     |
	| update-context | functional-914764                      | functional-914764 | jenkins | v1.35.0 | 10 May 25 17:12 UTC | 10 May 25 17:12 UTC |
	|                | update-context                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                 |                   |         |         |                     |                     |
	|----------------|----------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:09:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:09:07.036160  765023 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:09:07.036269  765023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:07.036279  765023 out.go:358] Setting ErrFile to fd 2...
	I0510 17:09:07.036293  765023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:07.036609  765023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:09:07.037215  765023 out.go:352] Setting JSON to false
	I0510 17:09:07.038373  765023 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10294,"bootTime":1746886653,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:09:07.038452  765023 start.go:140] virtualization: kvm guest
	I0510 17:09:07.040824  765023 out.go:177] * [functional-914764] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:09:07.042266  765023 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:09:07.042259  765023 notify.go:220] Checking for updates...
	I0510 17:09:07.044180  765023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:09:07.045712  765023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:09:07.047257  765023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:09:07.048582  765023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:09:07.061178  765023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:09:06.994925  764930 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:06.995691  764930 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.020812  764930 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.020967  764930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.093709  764930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.082073403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.093867  764930 docker.go:318] overlay module found
	I0510 17:09:07.096931  764930 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:07.098502  764930 start.go:304] selected driver: docker
	I0510 17:09:07.098523  764930 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.098633  764930 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.101636  764930 out.go:201] 
	W0510 17:09:07.103269  764930 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 17:09:07.104409  764930 out.go:201] 
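An aside on the warning above: process 764930 requested 250MiB of memory, which minikube rejects against its 1800MB usable minimum. A minimal sketch of the MiB-to-MB arithmetic behind that comparison (the exact rounding minikube applies is an assumption; only the two quoted figures come from the log):

```go
package main

import "fmt"

func main() {
	const bytesPerMiB = 1024 * 1024
	const bytesPerMB = 1000 * 1000
	requestedMiB := 250                                    // figure quoted in the warning
	requestedMB := requestedMiB * bytesPerMiB / bytesPerMB // 250 MiB ≈ 262 MB
	const minimumMB = 1800                                 // usable minimum quoted in the warning
	fmt.Printf("requested %d MB < minimum %d MB: %v\n",
		requestedMB, minimumMB, requestedMB < minimumMB)
}
```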
	I0510 17:09:07.064612  765023 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:07.065328  765023 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.094773  765023 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.094882  765023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.193650  765023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.175472465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.193762  765023 docker.go:318] overlay module found
	I0510 17:09:07.196248  765023 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:07.197931  765023 start.go:304] selected driver: docker
	I0510 17:09:07.197953  765023 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.198064  765023 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.198176  765023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.269866  765023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.252364205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.270735  765023 cni.go:84] Creating CNI manager for ""
	I0510 17:09:07.270821  765023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:09:07.270900  765023 start.go:347] cluster config:
	{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.273942  765023 out.go:177] * dry-run validation complete!
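Process 765023 stops after validation because it was started in dry-run mode ("dry-run validation complete!"). A hedged sketch of replaying that validation pass from the test workspace; the binary path (MINIKUBE_BIN) and profile name come from the log above, and --dry-run / --alsologtostderr are assumed to be the relevant minikube start flags:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// --dry-run validates the configuration without mutating system state.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-914764", "--dry-run", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}
```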
	
	
	==> CRI-O <==
	May 10 17:17:07 functional-914764 crio[4926]: time="2025-05-10 17:17:07.138999610Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 17:17:21 functional-914764 crio[4926]: time="2025-05-10 17:17:21.676416441Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0ef02cae-8858-4286-b582-d0c20ea95da2 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:21 functional-914764 crio[4926]: time="2025-05-10 17:17:21.676694731Z" level=info msg="Image docker.io/nginx:alpine not found" id=0ef02cae-8858-4286-b582-d0c20ea95da2 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:36 functional-914764 crio[4926]: time="2025-05-10 17:17:36.675869759Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=bd656db1-2b6b-4dc4-850a-39a386d4dd80 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:36 functional-914764 crio[4926]: time="2025-05-10 17:17:36.676210049Z" level=info msg="Image docker.io/nginx:alpine not found" id=bd656db1-2b6b-4dc4-850a-39a386d4dd80 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:37 functional-914764 crio[4926]: time="2025-05-10 17:17:37.807873966Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ce2308a3-06ce-4f47-b741-f2fd9caff235 name=/runtime.v1.ImageService/PullImage
	May 10 17:17:37 functional-914764 crio[4926]: time="2025-05-10 17:17:37.812018954Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	May 10 17:17:50 functional-914764 crio[4926]: time="2025-05-10 17:17:50.675749556Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=98f4984c-b162-4f10-930b-5013e044b688 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:50 functional-914764 crio[4926]: time="2025-05-10 17:17:50.676023969Z" level=info msg="Image docker.io/nginx:alpine not found" id=98f4984c-b162-4f10-930b-5013e044b688 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:53 functional-914764 crio[4926]: time="2025-05-10 17:17:53.676353302Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=260278a5-b973-4d3b-89d6-2368ac4e7f8b name=/runtime.v1.ImageService/ImageStatus
	May 10 17:17:53 functional-914764 crio[4926]: time="2025-05-10 17:17:53.676632267Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=260278a5-b973-4d3b-89d6-2368ac4e7f8b name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:07 functional-914764 crio[4926]: time="2025-05-10 17:18:07.675356796Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2d3ec9e4-d5bc-4a9e-9f67-d30b916d4c21 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:07 functional-914764 crio[4926]: time="2025-05-10 17:18:07.675670731Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2d3ec9e4-d5bc-4a9e-9f67-d30b916d4c21 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:08 functional-914764 crio[4926]: time="2025-05-10 17:18:08.428082511Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=291c6ad3-7e83-4f41-b80e-d9a2ece97f7b name=/runtime.v1.ImageService/PullImage
	May 10 17:18:08 functional-914764 crio[4926]: time="2025-05-10 17:18:08.432230549Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	May 10 17:18:19 functional-914764 crio[4926]: time="2025-05-10 17:18:19.676176914Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=863b2baf-e09d-4e4b-a96c-b8215c86deea name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:19 functional-914764 crio[4926]: time="2025-05-10 17:18:19.676430011Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=863b2baf-e09d-4e4b-a96c-b8215c86deea name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:23 functional-914764 crio[4926]: time="2025-05-10 17:18:23.676087670Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=96a52de8-fe7a-487b-a3c5-fb1046a23e29 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:23 functional-914764 crio[4926]: time="2025-05-10 17:18:23.676359569Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=96a52de8-fe7a-487b-a3c5-fb1046a23e29 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:34 functional-914764 crio[4926]: time="2025-05-10 17:18:34.676024046Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=121d767d-74f5-417b-a856-6a2d4786f723 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:34 functional-914764 crio[4926]: time="2025-05-10 17:18:34.676356636Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=121d767d-74f5-417b-a856-6a2d4786f723 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:49 functional-914764 crio[4926]: time="2025-05-10 17:18:49.676228390Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=1db60741-d6f1-49f8-9b1b-52a58b096054 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:18:49 functional-914764 crio[4926]: time="2025-05-10 17:18:49.676518584Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=1db60741-d6f1-49f8-9b1b-52a58b096054 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:19:09 functional-914764 crio[4926]: time="2025-05-10 17:19:09.534997762Z" level=info msg="Pulling image: docker.io/nginx:latest" id=6e1d29c2-935f-4cc8-b240-3343a83c53a9 name=/runtime.v1.ImageService/PullImage
	May 10 17:19:09 functional-914764 crio[4926]: time="2025-05-10 17:19:09.539067456Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
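The entries above show CRI-O repeatedly answering /runtime.v1.ImageService/ImageStatus with "not found" for docker.io/nginx:alpine while pulls from docker.io stall, which is the pattern behind the ImagePullBackOff failures in this report. A minimal sketch of issuing that same ImageStatus call against the crio socket (assumes node access and the k8s.io/cri-api v1 client; not part of the test harness):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same socket CRI-O serves in this cluster (see the cri-socket annotation
	// in the node description later in this report).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The same RPC the kubelet makes in the log: ImageService/ImageStatus.
	resp, err := runtimeapi.NewImageServiceClient(conn).ImageStatus(ctx,
		&runtimeapi.ImageStatusRequest{
			Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
		})
	if err != nil {
		log.Fatal(err)
	}
	if resp.Image == nil {
		fmt.Println("image not found") // corresponds to the "not found" lines above
	} else {
		fmt.Println("image present:", resp.Image.Id)
	}
}
```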
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	67bc1ef3aad71       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                      7 minutes ago       Running             echoserver                0                   5de56d5e621b6       hello-node-connect-58f9cf68d8-qpwnx
	92a1bf27d6bc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   49250225d3fc2       busybox-mount
	275c4bd8a45f7       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    10 minutes ago      Running             echoserver                0                   8f4cdec4dfd2e       hello-node-fcfd88b6f-2w246
	adf6f74785797       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      10 minutes ago      Running             coredns                   2                   c43d8cda87453       coredns-674b8bbfcf-p47zm
	b68f5758d229e       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                      10 minutes ago      Running             kindnet-cni               2                   3f4bbb3dd1065       kindnet-zqd22
	b7b96114d0832       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                      10 minutes ago      Running             kube-proxy                2                   b6887f1bec22c       kube-proxy-ss4s9
	cdd649f2b2a77       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   ec13ac1e0e4da       storage-provisioner
	6471b9d7617bf       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4                                      10 minutes ago      Running             kube-apiserver            0                   66fd7bd7785a2       kube-apiserver-functional-914764
	636c2672ef1fe       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                      10 minutes ago      Running             kube-controller-manager   2                   de7832d34774b       kube-controller-manager-functional-914764
	b8aac10c549f9       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      10 minutes ago      Running             etcd                      2                   9c2c1b285f992       etcd-functional-914764
	141001f7a575a       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                      10 minutes ago      Running             kube-scheduler            2                   e667509803051       kube-scheduler-functional-914764
	b49024e3234d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   ec13ac1e0e4da       storage-provisioner
	492f1e9244ec1       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4                                      11 minutes ago      Exited              kube-scheduler            1                   e667509803051       kube-scheduler-functional-914764
	36b4d81bf219c       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1                                      11 minutes ago      Exited              etcd                      1                   9c2c1b285f992       etcd-functional-914764
	3bfba7be3e9f9       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02                                      11 minutes ago      Exited              kube-controller-manager   1                   de7832d34774b       kube-controller-manager-functional-914764
	02b7ba0b0ae58       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                      11 minutes ago      Exited              kindnet-cni               1                   3f4bbb3dd1065       kindnet-zqd22
	a8651e3149e64       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b                                      11 minutes ago      Exited              coredns                   1                   c43d8cda87453       coredns-674b8bbfcf-p47zm
	6838fa75831e0       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68                                      11 minutes ago      Exited              kube-proxy                1                   b6887f1bec22c       kube-proxy-ss4s9
	
	
	==> coredns [a8651e3149e641e440e136f2d840345d8a000c042ee306b881bc8e87050dd071] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:60264 - 38594 "HINFO IN 4214401693363052933.6122617693655002349. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077168203s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
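The RBAC failures above are restart noise: while the apiserver comes back up, built-in clusterroles such as system:discovery are briefly unresolved, so this coredns instance cannot list namespaces and starts with an unsynced API. A hedged sketch of probing that exact permission with a SubjectAccessReview; the kubeconfig path is the one from the start log, and the snippet is illustrative rather than part of the test suite:

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/20720-722920/kubeconfig" // path from the start log
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the apiserver whether the coredns service account may list
	// namespaces -- the request denied with the RBAC error above.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:serviceaccount:kube-system:coredns",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "namespaces",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
}
```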
	
	
	==> coredns [adf6f74785797d4437b006b2d5407947dc2940b3526b84f3de5897b0796b5dca] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:49829 - 28248 "HINFO IN 4785160874892599986.9043715371217379896. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.468228785s
	
	
	==> describe nodes <==
	Name:               functional-914764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-914764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=functional-914764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_07_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-914764
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 17:19:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:17:52 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:17:52 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:17:52 +0000   Sat, 10 May 2025 17:06:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:17:52 +0000   Sat, 10 May 2025 17:07:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-914764
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 4efca45a9db948e587520e24f4b8739c
	  System UUID:                c4750a4a-b2ad-455b-869c-3f20a6f4d060
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-qpwnx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  default                     hello-node-fcfd88b6f-2w246                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-8pr5j                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-674b8bbfcf-p47zm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-914764                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-zqd22                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-914764              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-914764     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ss4s9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-914764              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-rdnm2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-46hh4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-914764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-914764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-914764 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-914764 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-914764 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-914764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-914764 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-914764 event: Registered Node functional-914764 in Controller
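In the "Allocated resources" block above, each percentage is the summed pod requests (or limits) divided by the node's allocatable capacity. A quick sketch recomputing the CPU request row from the per-pod table:

```go
package main

import "fmt"

func main() {
	// CPU requests from the per-pod table, in millicores:
	// mysql 600m, coredns 100m, etcd 100m, kindnet 100m,
	// kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m.
	requests := []int{600, 100, 100, 100, 250, 200, 100}
	total := 0
	for _, r := range requests {
		total += r
	}
	allocatable := 8 * 1000 // Allocatable: cpu 8 -> 8000m
	fmt.Printf("cpu requests: %dm (%d%%)\n", total, total*100/allocatable)
	// -> cpu requests: 1450m (18%), matching the summary row.
}
```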
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +1.002546] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +2.011769] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000002] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000003] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000004] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +4.063544] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000009] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000006] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.003973] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +8.191083] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-83fa5a3f9003
	[  +0.000005] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000000] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	[  +0.000001] ll header: 00000000: 22 70 c7 11 fe b3 ca 55 b0 c4 64 85 08 00
	
	
	==> etcd [36b4d81bf219cafe7496f77936067c3faf0dce6c9f63dbca8380d99503f20ce4] <==
	{"level":"info","ts":"2025-05-10T17:08:05.060111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:08:05.060137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-05-10T17:08:05.060156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.060247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:05.061492Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-914764 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:08:05.061526Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:05.061679Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:05.061717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:05.061511Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:05.062405Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:05.063096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:05.064203Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-05-10T17:08:05.064997Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:08:26.967923Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-05-10T17:08:26.968019Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-914764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"info","ts":"2025-05-10T17:08:27.106491Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106554Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106528Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106600Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-05-10T17:08:27.106594Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-05-10T17:08:27.109582Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:27.109669Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:27.109680Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-914764","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [b8aac10c549f9fe207cb7749c9a80728801747d2b052a742422b2d6428f2c0bd] <==
	{"level":"info","ts":"2025-05-10T17:08:37.646900Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:08:37.649119Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-05-10T17:08:37.649524Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-05-10T17:08:37.649580Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-05-10T17:08:37.649705Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:37.649749Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-05-10T17:08:39.478686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-05-10T17:08:39.478792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.478868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-05-10T17:08:39.481254Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-914764 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:08:39.481257Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:39.481277Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:08:39.481513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:39.481607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:08:39.482064Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:39.482184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:08:39.482787Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:08:39.482799Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-05-10T17:18:39.496706Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1119}
	{"level":"info","ts":"2025-05-10T17:18:39.516574Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1119,"took":"19.488777ms","hash":1815830659,"current-db-size-bytes":3809280,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-05-10T17:18:39.516628Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1815830659,"revision":1119,"compact-revision":-1}
	
	
	==> kernel <==
	 17:19:20 up  3:01,  0 users,  load average: 0.14, 0.81, 21.71
	Linux functional-914764 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [02b7ba0b0ae587a392ce040f4ae1a585fbf13aeea5d8ef7ca3970bd961801962] <==
	I0510 17:08:03.146176       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0510 17:08:03.146400       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0510 17:08:03.146582       1 main.go:148] setting mtu 1500 for CNI 
	I0510 17:08:03.146603       1 main.go:178] kindnetd IP family: "ipv4"
	I0510 17:08:03.146616       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0510 17:08:03.544905       1 controller.go:361] Starting controller kube-network-policies
	I0510 17:08:03.544933       1 controller.go:365] Waiting for informer caches to sync
	I0510 17:08:03.544940       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0510 17:08:07.144823       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0510 17:08:07.145866       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0510 17:08:07.145989       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	E0510 17:08:07.145899       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I0510 17:08:08.545389       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0510 17:08:08.545416       1 metrics.go:61] Registering metrics
	I0510 17:08:08.545470       1 controller.go:401] Syncing nftables rules
	I0510 17:08:13.549407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:08:13.549466       1 main.go:301] handling current node
	I0510 17:08:23.547496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:08:23.547546       1 main.go:301] handling current node
	
	
	==> kindnet [b68f5758d229ece0039fe62054ddeb7c47c90779f3224938cd509a5c38a85cd9] <==
	I0510 17:17:11.651583       1 main.go:301] handling current node
	I0510 17:17:21.651540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:17:21.651578       1 main.go:301] handling current node
	I0510 17:17:31.651528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:17:31.651563       1 main.go:301] handling current node
	I0510 17:17:41.645380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:17:41.645425       1 main.go:301] handling current node
	I0510 17:17:51.647544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:17:51.647585       1 main.go:301] handling current node
	I0510 17:18:01.651303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:18:01.651349       1 main.go:301] handling current node
	I0510 17:18:11.647550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:18:11.647597       1 main.go:301] handling current node
	I0510 17:18:21.646153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:18:21.646195       1 main.go:301] handling current node
	I0510 17:18:31.651495       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:18:31.651541       1 main.go:301] handling current node
	I0510 17:18:41.645460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:18:41.645507       1 main.go:301] handling current node
	I0510 17:18:51.647525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:18:51.647594       1 main.go:301] handling current node
	I0510 17:19:01.651530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:19:01.651571       1 main.go:301] handling current node
	I0510 17:19:11.647539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0510 17:19:11.647593       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6471b9d7617bf5433e4f0daaece1f4915cb330eb85fdf4e0cb3c343d71412587] <==
	I0510 17:08:42.103972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0510 17:08:42.193157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0510 17:08:42.236018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0510 17:08:42.240611       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0510 17:08:44.216017       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0510 17:08:44.264564       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0510 17:08:44.315490       1 controller.go:667] quota admission added evaluator for: endpoints
	I0510 17:08:44.369275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:08:44.374535       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:00.980570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:00.983440       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.62.155"}
	I0510 17:09:04.386521       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:05.053521       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.211.229"}
	I0510 17:09:05.054668       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:15.239845       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.47.90"}
	I0510 17:09:15.240753       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:16.894017       1 controller.go:667] quota admission added evaluator for: namespaces
	I0510 17:09:17.080334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.6.195"}
	I0510 17:09:17.084105       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:09:17.159466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.76.37"}
	I0510 17:09:18.484531       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.210.62"}
	I0510 17:09:18.485190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:12:16.052702       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.205.43"}
	I0510 17:12:16.053648       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0510 17:18:40.489428       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3bfba7be3e9f996d96db7394046635e3253dfecd4da4ed607987d9ab5c5045c1] <==
	I0510 17:08:10.021052       1 shared_informer.go:357] "Caches are synced" controller="HPA"
	I0510 17:08:10.037710       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0510 17:08:10.043296       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0510 17:08:10.044486       1 shared_informer.go:357] "Caches are synced" controller="deployment"
	I0510 17:08:10.045673       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0510 17:08:10.066802       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0510 17:08:10.066848       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0510 17:08:10.069125       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:08:10.139043       1 shared_informer.go:357] "Caches are synced" controller="crt configmap"
	I0510 17:08:10.141288       1 shared_informer.go:357] "Caches are synced" controller="ReplicationController"
	I0510 17:08:10.165800       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:08:10.174775       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0510 17:08:10.174899       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0510 17:08:10.174984       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-914764"
	I0510 17:08:10.175032       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0510 17:08:10.222432       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 17:08:10.288219       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0510 17:08:10.316954       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0510 17:08:10.320641       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:10.324531       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:10.365651       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0510 17:08:10.732906       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:10.816644       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:10.816672       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:08:10.816682       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [636c2672ef1fea08eeafc7079d9aa4d4733f3c056ec93159d244a784a22e43da] <==
	I0510 17:08:43.876216       1 shared_informer.go:357] "Caches are synced" controller="ephemeral"
	I0510 17:08:43.955562       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrapproving"
	I0510 17:08:43.962641       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0510 17:08:43.963797       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0510 17:08:43.963858       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0510 17:08:43.963972       1 shared_informer.go:357] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0510 17:08:44.050452       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0510 17:08:44.071645       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0510 17:08:44.095600       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0510 17:08:44.155234       1 shared_informer.go:357] "Caches are synced" controller="crt configmap"
	I0510 17:08:44.157536       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0510 17:08:44.167983       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:44.187985       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0510 17:08:44.212501       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0510 17:08:44.584724       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:44.593992       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0510 17:08:44.594020       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0510 17:08:44.594034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0510 17:09:16.949269       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.954705       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.959076       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.964078       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.964177       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.968771       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0510 17:09:16.972684       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [6838fa75831e0f1aa21700386c517d35bbaca85b8849dba5650f9e4d0cfa7a3b] <==
	I0510 17:08:03.048021       1 server_linux.go:63] "Using iptables proxy"
	E0510 17:08:07.068328       1 server.go:704] "Failed to retrieve node info" err="nodes \"functional-914764\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]"
	I0510 17:08:08.145545       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 17:08:08.145631       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:08:08.169501       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:08:08.169578       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:08:08.175554       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:08:08.176045       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:08:08.176078       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:08.177781       1 config.go:199] "Starting service config controller"
	I0510 17:08:08.177809       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:08:08.177825       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:08:08.177841       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:08:08.177847       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:08:08.177825       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:08:08.177888       1 config.go:329] "Starting node config controller"
	I0510 17:08:08.178550       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:08:08.278031       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:08:08.278045       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:08:08.278074       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:08:08.279352       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b7b96114d08323f0c0749fe7ebd64d1df4f6a61b2525f53d2a5de31fd7d263f1] <==
	I0510 17:08:41.170422       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:08:41.291189       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0510 17:08:41.291250       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:08:41.312036       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:08:41.312087       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:08:41.316467       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:08:41.316877       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:08:41.316900       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:41.318020       1 config.go:199] "Starting service config controller"
	I0510 17:08:41.318038       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:08:41.318058       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:08:41.318057       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:08:41.318109       1 config.go:329] "Starting node config controller"
	I0510 17:08:41.318128       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:08:41.318169       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:08:41.318225       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:08:41.419146       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:08:41.419344       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:08:41.419365       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:08:41.419405       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [141001f7a575a10c1de330a29ea37d300d597392e874c6a7c33bd009a9651034] <==
	I0510 17:08:38.434175       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:08:40.484683       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:08:40.484841       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:08:40.484923       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:08:40.484964       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:08:40.560996       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:08:40.561024       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:40.563429       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:40.563494       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:40.564819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:08:40.565046       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:08:40.664548       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [492f1e9244ec13fb014809793c1c73610cc391bde33313e524517243557fcd3c] <==
	I0510 17:08:04.760269       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:08:07.065829       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:08:07.065931       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:08:07.066005       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:08:07.066051       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:08:07.252411       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:08:07.252458       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:08:07.256227       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:08:07.256348       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:07.256956       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:07.256379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:08:07.357515       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:08:26.967154       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0510 17:08:26.967309       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0510 17:08:26.967454       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.797937    5289 manager.go:1116] Failed to create existing container: /crio-e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Error finding container e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Status 404 returned error can't find the container with id e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.798157    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Error finding container 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Status 404 returned error can't find the container with id 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.798357    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2: Error finding container 37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2: Status 404 returned error can't find the container with id 37323ca93e4ca94bc78e1014a60661db35194965c286bc811c2d9314d9b9afc2
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.798545    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Error finding container e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744: Status 404 returned error can't find the container with id e66750980305125d85dfd8c89bf2f42a7b7cd69ff38cd695017d3cecec46b744
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.798703    5289 manager.go:1116] Failed to create existing container: /crio-3f4bbb3dd1065712bea54b3ee29d89db4ccec0bdf4fa0c893a037b9c188525db: Error finding container 3f4bbb3dd1065712bea54b3ee29d89db4ccec0bdf4fa0c893a037b9c188525db: Status 404 returned error can't find the container with id 3f4bbb3dd1065712bea54b3ee29d89db4ccec0bdf4fa0c893a037b9c188525db
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.798868    5289 manager.go:1116] Failed to create existing container: /crio-b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Error finding container b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Status 404 returned error can't find the container with id b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.799038    5289 manager.go:1116] Failed to create existing container: /crio-0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Error finding container 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199: Status 404 returned error can't find the container with id 0af14ac9c03a43e1354e75b0ef7783e2a4f8468e626422b232f8f3f40405c199
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.799186    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b: Error finding container ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b: Status 404 returned error can't find the container with id ec13ac1e0e4da523c114a564c340f288d4a9f379b496ef1f97418f3c68b0f62b
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.799335    5289 manager.go:1116] Failed to create existing container: /docker/64f37bce315f5a034cbea4bfb9c09d071d62107b3bff4ff5457ed2915a6228ee/crio-b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Error finding container b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea: Status 404 returned error can't find the container with id b6887f1bec22c78a5398af50959b4410c0c422063ef74bfbd69990b2349f30ea
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.799538    5289 manager.go:1116] Failed to create existing container: /crio-9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Error finding container 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3: Status 404 returned error can't find the container with id 9c2c1b285f992cc4cb2312a150f454fc505af4db9cb9e2c6bdd6f68d6faf39d3
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.799697    5289 manager.go:1116] Failed to create existing container: /crio-c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Error finding container c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed: Status 404 returned error can't find the container with id c43d8cda8745382d4c12fa7b076d17572b5e7206a883f07311d28f1d52f39fed
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.905991    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897516905689903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:18:36 functional-914764 kubelet[5289]: E0510 17:18:36.906030    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897516905689903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:18:46 functional-914764 kubelet[5289]: E0510 17:18:46.907768    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897526907503424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:18:46 functional-914764 kubelet[5289]: E0510 17:18:46.907818    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897526907503424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:18:56 functional-914764 kubelet[5289]: E0510 17:18:56.909605    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897536909334096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:18:56 functional-914764 kubelet[5289]: E0510 17:18:56.909657    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897536909334096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:19:06 functional-914764 kubelet[5289]: E0510 17:19:06.911826    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897546911558643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:19:06 functional-914764 kubelet[5289]: E0510 17:19:06.911881    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897546911558643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:19:09 functional-914764 kubelet[5289]: E0510 17:19:09.534521    5289 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	May 10 17:19:09 functional-914764 kubelet[5289]: E0510 17:19:09.534597    5289 kuberuntime_image.go:42] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	May 10 17:19:09 functional-914764 kubelet[5289]: E0510 17:19:09.534889    5289 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pp6ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-8pr5j_default(c771ae48-77ae-4678-971a-fb02d978975e): ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 17:19:09 functional-914764 kubelet[5289]: E0510 17:19:09.536159    5289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-8pr5j" podUID="c771ae48-77ae-4678-971a-fb02d978975e"
	May 10 17:19:16 functional-914764 kubelet[5289]: E0510 17:19:16.913957    5289 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897556913633630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 17:19:16 functional-914764 kubelet[5289]: E0510 17:19:16.913997    5289 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746897556913633630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:212371,},InodesUsed:&UInt64Value{Value:109,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b49024e3234d94289dd8c47d384de72c28b1c82feee53d59246f24a29071365a] <==
	I0510 17:08:16.413888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:08:16.421502       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:08:16.421544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0510 17:08:16.423609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:08:19.878977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:08:24.138806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cdd649f2b2a77111f55c977d96d98353d1c743b9d526a9f779c982ca9e23bed6] <==
	W0510 17:18:54.866360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:18:56.869469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:18:56.874237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:18:58.876910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:18:58.880957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:00.884631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:00.888461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:02.890944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:02.894921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:04.898925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:04.902921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:06.905726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:06.911061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:08.914401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:08.918213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:10.920987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:10.924915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:12.928067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:12.933360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:14.937376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:14.941509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:16.944440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:16.949771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:18.952391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 17:19:18.956509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
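
The kindnet and kube-proxy errors in the dump above (several clusterroles reported as "not found", including clusterrole.rbac.authorization.k8s.io "kindnet") appear only around the 17:08 apiserver restart and stop once caches sync at 17:08:08. A hedged way to confirm the RBAC objects are intact after such a restart, using only standard kubectl (the context name is taken from this run):

	# sketch: verify the clusterroles exist and kindnet's permission resolves
	kubectl --context functional-914764 get clusterrole kindnet system:node-proxier
	kubectl --context functional-914764 auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet
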
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-914764 -n functional-914764
helpers_test.go:261: (dbg) Run:  kubectl --context functional-914764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4: exit status 1 (84.404775ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:07 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://92a1bf27d6bc4ba59b72bae94c43e1d4bd97d05ed9f91e5e84a11c3abda97a8e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 May 2025 17:09:09 +0000
	      Finished:     Sat, 10 May 2025 17:09:09 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-54pr4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-54pr4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-914764
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.116s (1.152s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-8pr5j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:18 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pp6ql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pp6ql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-8pr5j to functional-914764
	  Warning  Failed     3m46s (x2 over 7m20s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m33s (x2 over 7m20s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m33s (x2 over 7m20s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3m22s (x3 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     12s (x3 over 7m20s)    kubelet            Error: ErrImagePull
	  Warning  Failed     12s                    kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:15 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hkc4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2hkc4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/nginx-svc to functional-914764
	  Warning  Failed     5m49s (x2 over 9m7s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m14s (x3 over 9m7s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m14s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:62223d644fa234c3a1cc785ee14242ec47a77364226f1c811d2f669f96dc2ac8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    105s (x4 over 9m6s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     105s (x4 over 9m6s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    91s (x4 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-914764/192.168.49.2
	Start Time:       Sat, 10 May 2025 17:09:13 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qdrjr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qdrjr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-914764
	  Warning  Failed     6m19s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88b3388ea06c7262e410a3ab5c05dc4088b7b39dea569addd8c30766f4f47440 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m16s (x2 over 9m37s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m16s (x3 over 9m37s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m38s (x5 over 9m37s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m38s (x5 over 9m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m24s (x4 over 10m)    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-rdnm2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-46hh4" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-914764 describe pod busybox-mount mysql-58ccfd96bb-8pr5j nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-rdnm2 kubernetes-dashboard-7779f9b69b-46hh4: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.74s)
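
Every non-running pod in this run is blocked on the same docker.io "toomanyrequests" error, so the MySQL failure reflects Docker Hub's unauthenticated pull rate limit rather than a cluster fault. A minimal mitigation sketch, assuming the CI host has Docker Hub credentials and a local Docker daemon: pull the image on the host, then side-load it so the kubelet never contacts the registry.

	# assumption: host is logged in to Docker Hub, so the pull is not rate-limited
	docker pull docker.io/mysql:5.7
	minikube -p functional-914764 image load docker.io/mysql:5.7
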

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-914764 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [57253c8a-3401-46d1-bcfd-f34e9be17cbf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-914764 -n functional-914764
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-05-10 17:13:15.535543077 +0000 UTC m=+1135.217330205
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-914764 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-914764 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-914764/192.168.49.2
Start Time:       Sat, 10 May 2025 17:09:15 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hkc4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2hkc4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                 From               Message
----     ------     ----                ----               -------
Normal   Scheduled  4m                  default-scheduler  Successfully assigned default/nginx-svc to functional-914764
Warning  Failed     3m1s                kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m1s                kubelet            Error: ErrImagePull
Normal   BackOff    3m                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     3m                  kubelet            Error: ImagePullBackOff
Normal   Pulling    2m46s (x2 over 4m)  kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-914764 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-914764 logs nginx-svc -n default: exit status 1 (62.765528ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-914764 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
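
As the stderr above shows, kubectl logs returns BadRequest for a container that never started, so the pull failures are only visible as events. A hedged diagnostic that reads just the failure events for the stuck pod (involvedObject.name and reason are standard field selectors for v1 Events):

	# sketch: list only the image-pull failure events for the nginx-svc pod
	kubectl --context functional-914764 get events -n default --field-selector involvedObject.name=nginx-svc,reason=Failed
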

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (102.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0510 17:13:15.661873  729815 retry.go:31] will retry after 3.288783356s: Temporary Error: Get "http:": http: no Host in request URL
I0510 17:13:18.950852  729815 retry.go:31] will retry after 4.840022068s: Temporary Error: Get "http:": http: no Host in request URL
I0510 17:13:23.792081  729815 retry.go:31] will retry after 8.307956944s: Temporary Error: Get "http:": http: no Host in request URL
I0510 17:13:32.100615  729815 retry.go:31] will retry after 12.926518202s: Temporary Error: Get "http:": http: no Host in request URL
I0510 17:13:45.028344  729815 retry.go:31] will retry after 11.461379152s: Temporary Error: Get "http:": http: no Host in request URL
I0510 17:13:56.490568  729815 retry.go:31] will retry after 25.632833098s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-914764 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.103.47.90   10.103.47.90   80:32101/TCP   5m43s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (102.54s)
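For reference, the empty host in Get "http:" means the test never captured a URL for the service, even though nginx-svc was assigned an external IP. A manual walk through the same direct-access path, assuming a healthy pod behind the service, would be:

    # Terminal 1: route the LoadBalancer IP range onto the host (needs sudo).
    minikube -p functional-914764 tunnel

    # Terminal 2: resolve the service's external IP and request it directly.
    IP=$(kubectl --context functional-914764 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -fsS "http://${IP}" | grep 'Welcome to nginx!'

Here the request could not have succeeded in any case: the nginx pod behind the service was still in ImagePullBackOff from the rate limit above.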

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.39s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cmxkz" [c123f562-4744-4a16-98d1-fce9d4f44d5c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-05-10 18:00:56.531833672 +0000 UTC m=+3996.213620797
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-256321 describe po kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-256321 describe po kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-cmxkz
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-256321/192.168.76.2
Start Time:       Sat, 10 May 2025 17:51:25 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6zlkm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-6zlkm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m31s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz to embed-certs-256321
Warning  Failed     6m2s                    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m41s (x5 over 9m31s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m11s (x4 over 8m57s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m11s (x5 over 8m57s)   kubelet            Error: ErrImagePull
Warning  Failed     2m43s (x16 over 8m57s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    99s (x21 over 8m57s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-256321 logs kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-256321 logs kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard: exit status 1 (70.601019ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-7779f9b69b-cmxkz" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-256321 logs kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
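This is the same Docker Hub rate limit, this time against the dashboard image. To confirm the throttle on the node itself rather than in the test harness, the pull can be exercised through the kubelet's own path (CRI-O via crictl) from inside the node; the profile name and image digest are copied from the events above:

    minikube -p embed-certs-256321 ssh -- sudo crictl pull \
      docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93

A toomanyrequests error here matches the kubelet events and rules out a harness-side problem.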
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-256321
helpers_test.go:235: (dbg) docker inspect embed-certs-256321:

-- stdout --
	[
	    {
	        "Id": "63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc",
	        "Created": "2025-05-10T17:50:07.057229049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1049070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:51:06.548958168Z",
	            "FinishedAt": "2025-05-10T17:51:04.564798034Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/hosts",
	        "LogPath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc-json.log",
	        "Name": "/embed-certs-256321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-256321:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-256321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc",
	                "LowerDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-256321",
	                "Source": "/var/lib/docker/volumes/embed-certs-256321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-256321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-256321",
	                "name.minikube.sigs.k8s.io": "embed-certs-256321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7feda4654c2d46844b5b20e13b89d18059ad4da9e0e19e5020bd5ddd9aec57d",
	            "SandboxKey": "/var/run/docker/netns/a7feda4654c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-256321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:2c:5a:2f:2d:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba2829c5de699830ad18d5fc13a225f391a5c001fb69b6f025c5df9f94898875",
	                    "EndpointID": "29ac505c1766abb3203531b1448a2d4fcd2e3076f2ec8757e8216a52971e763a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-256321",
	                        "63ef81e14763"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
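The inspect dump above is mostly boilerplate; the post-mortem-relevant fields (container state, node IP, published API server port) can be extracted directly, assuming jq is available on the host:

    docker inspect embed-certs-256321 | jq '.[0] | {
      status: .State.Status,
      startedAt: .State.StartedAt,
      nodeIP: .NetworkSettings.Networks["embed-certs-256321"].IPAddress,
      apiserverHostPort: .NetworkSettings.Ports["8443/tcp"][0].HostPort
    }'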
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256321 -n embed-certs-256321
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-256321 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-256321 logs -n 25: (1.168570388s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-697935             | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-697935                              | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:50 UTC |                     |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-058078                  | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-676255       | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256321                 | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | no-preload-058078 image list                           | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-173135             | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-173135                  | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-173135 image list                           | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:39.942859 1062960 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:39.943098 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943129 1062960 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:39.943146 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943562 1062960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:52:39.944604 1062960 out.go:352] Setting JSON to false
	I0510 17:52:39.945997 1062960 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12907,"bootTime":1746886653,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:39.946130 1062960 start.go:140] virtualization: kvm guest
	I0510 17:52:39.948309 1062960 out.go:177] * [newest-cni-173135] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:39.949674 1062960 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:39.949716 1062960 notify.go:220] Checking for updates...
	I0510 17:52:39.952354 1062960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:39.953722 1062960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:39.955058 1062960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:52:39.956484 1062960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:39.957799 1062960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:39.959587 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:39.960145 1062960 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:39.985577 1062960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:52:39.985704 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.035501 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.02617924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.035611 1062960 docker.go:318] overlay module found
	I0510 17:52:40.037784 1062960 out.go:177] * Using the docker driver based on existing profile
	I0510 17:52:40.039108 1062960 start.go:304] selected driver: docker
	I0510 17:52:40.039123 1062960 start.go:908] validating driver "docker" against &{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.039239 1062960 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:40.040135 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.092965 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.084143213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.093291 1062960 start_flags.go:994] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:40.093320 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:40.093383 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:40.093421 1062960 start.go:347] cluster config:
	{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.096146 1062960 out.go:177] * Starting "newest-cni-173135" primary control-plane node in "newest-cni-173135" cluster
	I0510 17:52:40.097483 1062960 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 17:52:40.098838 1062960 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 17:52:40.100016 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:40.100054 1062960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:40.100073 1062960 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:40.100128 1062960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 17:52:40.100157 1062960 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:40.100165 1062960 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:40.100261 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.120688 1062960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon, skipping pull
	I0510 17:52:40.120714 1062960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in daemon, skipping load
	I0510 17:52:40.120734 1062960 cache.go:230] Successfully downloaded all kic artifacts
	I0510 17:52:40.120784 1062960 start.go:360] acquireMachinesLock for newest-cni-173135: {Name:mk75975d6daf4063f8ba79544d03229010ceb1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:40.120860 1062960 start.go:364] duration metric: took 50.497µs to acquireMachinesLock for "newest-cni-173135"
	I0510 17:52:40.120885 1062960 start.go:96] Skipping create...Using existing machine configuration
	I0510 17:52:40.120892 1062960 fix.go:54] fixHost starting: 
	I0510 17:52:40.121107 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.139354 1062960 fix.go:112] recreateIfNeeded on newest-cni-173135: state=Stopped err=<nil>
	W0510 17:52:40.139386 1062960 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 17:52:40.141294 1062960 out.go:177] * Restarting existing docker container for "newest-cni-173135" ...
	W0510 17:52:39.629875 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	W0510 17:52:41.630228 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	I0510 17:52:43.131391 1044308 pod_ready.go:94] pod "etcd-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.131443 1044308 pod_ready.go:86] duration metric: took 50.006172737s for pod "etcd-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.134286 1044308 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.138012 1044308 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.138036 1044308 pod_ready.go:86] duration metric: took 3.724234ms for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.140268 1044308 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.143330 1044308 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.143350 1044308 pod_ready.go:86] duration metric: took 3.063093ms for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.145633 1044308 pod_ready.go:83] waiting for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.329167 1044308 pod_ready.go:94] pod "kube-proxy-8tdw4" is "Ready"
	I0510 17:52:43.329196 1044308 pod_ready.go:86] duration metric: took 183.5398ms for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.529673 1044308 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929860 1044308 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.929890 1044308 pod_ready.go:86] duration metric: took 400.187942ms for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929904 1044308 pod_ready.go:40] duration metric: took 1m22.819056587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:52:43.974390 1044308 start.go:607] kubectl: 1.33.0, cluster: 1.20.0 (minor skew: 13)
	I0510 17:52:43.975971 1044308 out.go:201] 
	W0510 17:52:43.977399 1044308 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.20.0.
	I0510 17:52:43.978880 1044308 out.go:177]   - Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
	I0510 17:52:43.980215 1044308 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-697935" cluster and "default" namespace by default
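The skew warning above is advisory, but a 13-minor-version gap is far outside kubectl's supported +/-1 window; checking and working around it uses only commands already present in this run (the profile name is from the log):

    # Confirm the client/server gap, then use the version-matched kubectl that minikube downloads.
    kubectl --context old-k8s-version-697935 version --output=json
    minikube -p old-k8s-version-697935 kubectl -- get pods -A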
	I0510 17:52:40.142629 1062960 cli_runner.go:164] Run: docker start newest-cni-173135
	I0510 17:52:40.387277 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.406155 1062960 kic.go:430] container "newest-cni-173135" state is running.
	I0510 17:52:40.406603 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:40.425434 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.425733 1062960 machine.go:93] provisionDockerMachine start ...
	I0510 17:52:40.425813 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:40.446701 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:40.446942 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:40.446954 1062960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 17:52:40.447629 1062960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38662->127.0.0.1:33504: read: connection reset by peer
	I0510 17:52:43.567334 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.567369 1062960 ubuntu.go:169] provisioning hostname "newest-cni-173135"
	I0510 17:52:43.567474 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.585810 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.586092 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.586114 1062960 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-173135 && echo "newest-cni-173135" | sudo tee /etc/hostname
	I0510 17:52:43.720075 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.720180 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.738458 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.738683 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.738700 1062960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-173135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-173135/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-173135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:43.860357 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:43.860392 1062960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 17:52:43.860425 1062960 ubuntu.go:177] setting up certificates
	I0510 17:52:43.860438 1062960 provision.go:84] configureAuth start
	I0510 17:52:43.860501 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:43.878837 1062960 provision.go:143] copyHostCerts
	I0510 17:52:43.878913 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem, removing ...
	I0510 17:52:43.878934 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem
	I0510 17:52:43.879010 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 17:52:43.879140 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem, removing ...
	I0510 17:52:43.879154 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem
	I0510 17:52:43.879187 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 17:52:43.879281 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem, removing ...
	I0510 17:52:43.879293 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem
	I0510 17:52:43.879328 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 17:52:43.879447 1062960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.newest-cni-173135 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-173135]
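The san=[...] list in that provision.go line feeds straight into X.509 server-certificate generation. A minimal Go sketch of the same idea, self-signed for brevity (minikube instead signs with ca.pem/ca-key.pem; the SANs and the 26280h lifetime are taken from this log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-173135"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the log
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the provision.go line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-173135"},
    	}
    	// Self-signed for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }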
	I0510 17:52:44.399990 1062960 provision.go:177] copyRemoteCerts
	I0510 17:52:44.400060 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:44.400097 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.417363 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.509498 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:44.533816 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 17:52:44.556664 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:52:44.579844 1062960 provision.go:87] duration metric: took 719.387116ms to configureAuth
	I0510 17:52:44.579874 1062960 ubuntu.go:193] setting minikube options for container-runtime
	I0510 17:52:44.580082 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:44.580225 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.597779 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.597997 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:44.598015 1062960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:44.861571 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:44.861603 1062960 machine.go:96] duration metric: took 4.435849898s to provisionDockerMachine
	I0510 17:52:44.861615 1062960 start.go:293] postStartSetup for "newest-cni-173135" (driver="docker")
	I0510 17:52:44.861633 1062960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:44.861696 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:44.861741 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.880393 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.968863 1062960 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:44.972444 1062960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 17:52:44.972471 1062960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 17:52:44.972479 1062960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 17:52:44.972486 1062960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 17:52:44.972499 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 17:52:44.972551 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 17:52:44.972632 1062960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem -> 7298152.pem in /etc/ssl/certs
	I0510 17:52:44.972715 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 17:52:44.981250 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:45.004513 1062960 start.go:296] duration metric: took 142.88043ms for postStartSetup
	I0510 17:52:45.004636 1062960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:52:45.004699 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.022563 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.108643 1062960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 17:52:45.113165 1062960 fix.go:56] duration metric: took 4.992266927s for fixHost
	I0510 17:52:45.113190 1062960 start.go:83] releasing machines lock for "newest-cni-173135", held for 4.992317581s
	I0510 17:52:45.113270 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:45.130656 1062960 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:45.130728 1062960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:45.130785 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.130732 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.149250 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.153557 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.235894 1062960 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:45.328928 1062960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:45.467882 1062960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 17:52:45.472485 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.480914 1062960 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 17:52:45.480989 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.489392 1062960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 17:52:45.489423 1062960 start.go:495] detecting cgroup driver to use...
	I0510 17:52:45.489464 1062960 detect.go:187] detected "cgroupfs" cgroup driver on host os
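The detected "cgroupfs" cgroup driver decision comes from inspecting the host. One common heuristic (an assumption here, not necessarily minikube's exact check) is to test for a cgroup v2 unified hierarchy:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// On a cgroup v2 host /sys/fs/cgroup/cgroup.controllers exists and
    	// the systemd driver is the usual choice; otherwise assume the
    	// legacy cgroupfs driver, which is what this run detected.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2: systemd driver likely")
    	} else {
    		fmt.Println("cgroup v1: cgroupfs driver likely")
    	}
    }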
	I0510 17:52:45.489535 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:45.501274 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:45.512452 1062960 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:45.512528 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:45.524828 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:45.535636 1062960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:45.618303 1062960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:45.695586 1062960 docker.go:241] disabling docker service ...
	I0510 17:52:45.695664 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:45.707968 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:45.719029 1062960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:45.800197 1062960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:45.887455 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:45.898860 1062960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:45.914760 1062960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:45.914818 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.924202 1062960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:45.924260 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.933839 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.944911 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.954202 1062960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:45.962950 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.972583 1062960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.981599 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
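Those sed invocations rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and the cgroup manager. The same two substitutions expressed in Go, purely to make the regexes concrete (the input string is a made-up fragment):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf)
    }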
	I0510 17:52:45.991016 1062960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:45.999017 1062960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:46.007316 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.090516 1062960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 17:52:46.208208 1062960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:46.208290 1062960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:46.212169 1062960 start.go:563] Will wait 60s for crictl version
	I0510 17:52:46.212233 1062960 ssh_runner.go:195] Run: which crictl
	I0510 17:52:46.215714 1062960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:46.250179 1062960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 17:52:46.250256 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.286288 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.324763 1062960 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 17:52:46.326001 1062960 cli_runner.go:164] Run: docker network inspect newest-cni-173135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 17:52:46.342321 1062960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:46.346220 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.358987 1062960 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0510 17:52:46.360438 1062960 kubeadm.go:875] updating cluster {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:52:46.360585 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:46.360654 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.402300 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.402322 1062960 crio.go:433] Images already preloaded, skipping extraction
	I0510 17:52:46.402371 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.438279 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.438310 1062960 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:46.438321 1062960 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.33.0 crio true true} ...
	I0510 17:52:46.438480 1062960 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-173135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 17:52:46.438582 1062960 ssh_runner.go:195] Run: crio config
	I0510 17:52:46.483257 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:46.483281 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:46.483292 1062960 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0510 17:52:46.483315 1062960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-173135 NodeName:newest-cni-173135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:46.483479 1062960 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-173135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
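The kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch, assuming gopkg.in/yaml.v3, that walks such a stream from stdin and prints each document's apiVersion and kind:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// A yaml.Decoder yields one document per Decode call across the
    	// "---" separators used in the config above.
    	dec := yaml.NewDecoder(os.Stdin)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		err := dec.Decode(&doc)
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }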
	
	I0510 17:52:46.483553 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:46.492414 1062960 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:46.492500 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:46.501119 1062960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0510 17:52:46.518140 1062960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:46.535112 1062960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0510 17:52:46.551871 1062960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:46.555171 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.565729 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.652845 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:46.666063 1062960 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135 for IP: 192.168.94.2
	I0510 17:52:46.666087 1062960 certs.go:194] generating shared ca certs ...
	I0510 17:52:46.666108 1062960 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:46.666267 1062960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 17:52:46.666346 1062960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 17:52:46.666367 1062960 certs.go:256] generating profile certs ...
	I0510 17:52:46.666488 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/client.key
	I0510 17:52:46.666575 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key.eac5560e
	I0510 17:52:46.666638 1062960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key
	I0510 17:52:46.666788 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem (1338 bytes)
	W0510 17:52:46.666836 1062960 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815_empty.pem, impossibly tiny 0 bytes
	I0510 17:52:46.666855 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:46.666891 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:46.666924 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:46.666954 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:46.667014 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:46.667736 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:46.694046 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:52:46.720567 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:46.750803 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 17:52:46.783126 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 17:52:46.861172 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 17:52:46.886437 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:46.909743 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:52:46.932746 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /usr/share/ca-certificates/7298152.pem (1708 bytes)
	I0510 17:52:46.955864 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:46.978875 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem --> /usr/share/ca-certificates/729815.pem (1338 bytes)
	I0510 17:52:47.001846 1062960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:47.018936 1062960 ssh_runner.go:195] Run: openssl version
	I0510 17:52:47.024207 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:47.033345 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036756 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036814 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.043306 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:47.051810 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/729815.pem && ln -fs /usr/share/ca-certificates/729815.pem /etc/ssl/certs/729815.pem"
	I0510 17:52:47.060972 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064315 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 17:06 /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064361 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.070986 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/729815.pem /etc/ssl/certs/51391683.0"
	I0510 17:52:47.079952 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7298152.pem && ln -fs /usr/share/ca-certificates/7298152.pem /etc/ssl/certs/7298152.pem"
	I0510 17:52:47.089676 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093441 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 17:06 /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093504 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.100198 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7298152.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 17:52:47.108827 1062960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:47.112497 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 17:52:47.119081 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 17:52:47.125525 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 17:52:47.131948 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 17:52:47.138247 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 17:52:47.145052 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
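Each -checkend 86400 call asks openssl whether the certificate expires within the next 24 hours. For reference, a Go equivalent of that check (takes a PEM file path as its only argument):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same test as `openssl x509 -checkend 86400`: exit nonzero if the
    	// certificate's NotAfter falls within the next 24 hours.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    		os.Exit(1)
    	}
    	fmt.Println("Certificate will not expire")
    }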
	I0510 17:52:47.152189 1062960 kubeadm.go:392] StartCluster: {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:47.152299 1062960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:47.152356 1062960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:47.190954 1062960 cri.go:89] found id: ""
	I0510 17:52:47.191057 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:47.200662 1062960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 17:52:47.200683 1062960 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 17:52:47.200729 1062960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 17:52:47.210371 1062960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 17:52:47.211583 1062960 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-173135" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.212205 1062960 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-722920/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-173135" cluster setting kubeconfig missing "newest-cni-173135" context setting]
	I0510 17:52:47.213167 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.215451 1062960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 17:52:47.225765 1062960 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0510 17:52:47.225809 1062960 kubeadm.go:593] duration metric: took 25.118512ms to restartPrimaryControlPlane
	I0510 17:52:47.225823 1062960 kubeadm.go:394] duration metric: took 73.645898ms to StartCluster
	I0510 17:52:47.225844 1062960 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.225925 1062960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.227600 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.227929 1062960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:47.228146 1062960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 17:52:47.228262 1062960 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-173135"
	I0510 17:52:47.228286 1062960 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-173135"
	W0510 17:52:47.228300 1062960 addons.go:247] addon storage-provisioner should already be in state true
	I0510 17:52:47.228322 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:47.228340 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228374 1062960 addons.go:69] Setting default-storageclass=true in profile "newest-cni-173135"
	I0510 17:52:47.228389 1062960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-173135"
	I0510 17:52:47.228696 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.228794 1062960 addons.go:69] Setting metrics-server=true in profile "newest-cni-173135"
	I0510 17:52:47.228819 1062960 addons.go:238] Setting addon metrics-server=true in "newest-cni-173135"
	W0510 17:52:47.228830 1062960 addons.go:247] addon metrics-server should already be in state true
	I0510 17:52:47.228871 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228905 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229098 1062960 addons.go:69] Setting dashboard=true in profile "newest-cni-173135"
	I0510 17:52:47.229122 1062960 addons.go:238] Setting addon dashboard=true in "newest-cni-173135"
	W0510 17:52:47.229131 1062960 addons.go:247] addon dashboard should already be in state true
	I0510 17:52:47.229160 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.229350 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229636 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.231952 1062960 out.go:177] * Verifying Kubernetes components...
	I0510 17:52:47.233708 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.257836 1062960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 17:52:47.259786 1062960 addons.go:238] Setting addon default-storageclass=true in "newest-cni-173135"
	W0510 17:52:47.259808 1062960 addons.go:247] addon default-storageclass should already be in state true
	I0510 17:52:47.259842 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.260502 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:52:47.260520 1062960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:52:47.260587 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.260894 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.269485 1062960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 17:52:47.270561 1062960 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 17:52:47.271826 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 17:52:47.271848 1062960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 17:52:47.271913 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.273848 1062960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:52:47.275490 1062960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.275521 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:52:47.275721 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.287652 1062960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.287676 1062960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:52:47.287737 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.300295 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.308088 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.314958 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.317183 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.570630 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.644300 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:47.648111 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 17:52:47.648144 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 17:52:47.745020 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 17:52:47.745054 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 17:52:47.746206 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.753235 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:52:47.753267 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 17:52:47.852275 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:52:47.852309 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:52:47.854261 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 17:52:47.854291 1062960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 17:52:47.957529 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 17:52:47.957561 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 17:52:47.962427 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:47.962453 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0510 17:52:47.967141 1062960 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0510 17:52:47.967185 1062960 retry.go:31] will retry after 329.411117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
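The apply fails because the apiserver is not listening yet, so retry.go schedules another attempt. The general shape of that pattern, as a sketch (minikube's retry.go differs in detail):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry runs op up to attempts times, doubling the delay after each
    // failure, and returns the last error if none of the attempts succeed.
    func retry(attempts int, delay time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	calls := 0
    	// Simulate an apiserver that refuses the first two connections.
    	err := retry(5, 300*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused")
    		}
    		return nil
    	})
    	fmt.Println(calls, err)
    }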
	I0510 17:52:47.967271 1062960 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:52:47.967381 1062960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:52:48.055318 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 17:52:48.055400 1062960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 17:52:48.060787 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:48.149914 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 17:52:48.149947 1062960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 17:52:48.175035 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 17:52:48.175070 1062960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 17:52:48.263718 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 17:52:48.263750 1062960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 17:52:48.282195 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:48.282227 1062960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 17:52:48.297636 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:48.359369 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:52.345196 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.598944537s)
	I0510 17:52:52.345534 1062960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.378119806s)
	I0510 17:52:52.345610 1062960 api_server.go:72] duration metric: took 5.117639828s to wait for apiserver process to appear ...
	I0510 17:52:52.345622 1062960 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:52:52.345683 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.350659 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 17:52:52.350693 1062960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
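The 500 above is expected during a restart: the one failing check, [-]poststarthook/rbac/bootstrap-roles, clears once RBAC bootstrapping finishes, and the next poll below sees 200. A minimal Go poller of the same endpoint (TLS verification is skipped here for brevity; minikube trusts its own CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// Keep asking /healthz until the apiserver reports 200 instead of 500.
    	for {
    		resp, err := client.Get("https://192.168.94.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%d %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }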
	I0510 17:52:52.462305 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.401465129s)
	I0510 17:52:52.462425 1062960 addons.go:479] Verifying addon metrics-server=true in "newest-cni-173135"
	I0510 17:52:52.462366 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.164694895s)
	I0510 17:52:52.558877 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199364581s)
	I0510 17:52:52.560719 1062960 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-173135 addons enable metrics-server
	
	I0510 17:52:52.562364 1062960 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0510 17:52:52.563698 1062960 addons.go:514] duration metric: took 5.33556927s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0510 17:52:52.846151 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.850590 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0510 17:52:52.851935 1062960 api_server.go:141] control plane version: v1.33.0
	I0510 17:52:52.851968 1062960 api_server.go:131] duration metric: took 506.335848ms to wait for apiserver health ...
	I0510 17:52:52.851979 1062960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:52:52.855964 1062960 system_pods.go:59] 9 kube-system pods found
	I0510 17:52:52.856013 1062960 system_pods.go:61] "coredns-674b8bbfcf-l2m27" [11b63e72-35af-4a70-a7d3-b11e18104e2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856039 1062960 system_pods.go:61] "etcd-newest-cni-173135" [60c35044-778d-45d4-8d96-e58efbd9b54b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 17:52:52.856062 1062960 system_pods.go:61] "kindnet-5nzlt" [9158a53c-5cd1-426c-a255-37618e292899] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0510 17:52:52.856073 1062960 system_pods.go:61] "kube-apiserver-newest-cni-173135" [790eeefa-f593-4148-b5f3-43bf9807166f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 17:52:52.856085 1062960 system_pods.go:61] "kube-controller-manager-newest-cni-173135" [75bdb232-66d8-442a-8566-34a3d4674876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 17:52:52.856096 1062960 system_pods.go:61] "kube-proxy-v2tt7" [e502d755-4ecb-4567-9259-547f7c063830] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 17:52:52.856108 1062960 system_pods.go:61] "kube-scheduler-newest-cni-173135" [8bfc0953-197d-4185-b2e7-6e1a2d97a8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 17:52:52.856117 1062960 system_pods.go:61] "metrics-server-f79f97bbb-z4g7z" [a6bcfd5e-6f32-43ef-a6e7-336c90faf9ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856125 1062960 system_pods.go:61] "storage-provisioner" [effda141-cd8d-4f87-97a1-9166c59e1de0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856132 1062960 system_pods.go:74] duration metric: took 4.146105ms to wait for pod list to return data ...
	I0510 17:52:52.856143 1062960 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:52:52.858633 1062960 default_sa.go:45] found service account: "default"
	I0510 17:52:52.858658 1062960 default_sa.go:55] duration metric: took 2.507165ms for default service account to be created ...
	I0510 17:52:52.858670 1062960 kubeadm.go:578] duration metric: took 5.630701473s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
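
Note: the three Pending pods above (coredns, metrics-server, storage-provisioner) are all blocked by the node.kubernetes.io/not-ready taint, which the node lifecycle controller removes once the node reports Ready (i.e. once the CNI is up). The kubeadm wait map on the line above confirms this run deliberately skips node_ready and apps_running, so start returns before the taint clears. Watching the taint disappear is one command:

	kubectl --context newest-cni-173135 get node newest-cni-173135 -o jsonpath='{.spec.taints}'
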
	I0510 17:52:52.858701 1062960 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:52:52.861375 1062960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 17:52:52.861398 1062960 node_conditions.go:123] node cpu capacity is 8
	I0510 17:52:52.861411 1062960 node_conditions.go:105] duration metric: took 2.704535ms to run NodePressure ...
	I0510 17:52:52.861422 1062960 start.go:241] waiting for startup goroutines ...
	I0510 17:52:52.861431 1062960 start.go:246] waiting for cluster config update ...
	I0510 17:52:52.861444 1062960 start.go:255] writing updated cluster config ...
	I0510 17:52:52.861692 1062960 ssh_runner.go:195] Run: rm -f paused
	I0510 17:52:52.918445 1062960 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:52:52.920711 1062960 out.go:177] * Done! kubectl is now configured to use "newest-cni-173135" cluster and "default" namespace by default
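
Note: "Done!" here only means the control plane answered healthz and the kubeconfig was written; with the kubelet and node_ready waits disabled, workloads may still be scheduling. The active context can be confirmed with:

	kubectl config current-context

which should print newest-cni-173135 after this run.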
	
	
	==> CRI-O <==
	May 10 17:59:17 embed-certs-256321 crio[675]: time="2025-05-10 17:59:17.270157898Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6aeb51a6-7402-4977-bfd3-d8b95a06cd47 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:28 embed-certs-256321 crio[675]: time="2025-05-10 17:59:28.269800165Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ad002a03-884e-4939-bdda-2d30153ba73b name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:28 embed-certs-256321 crio[675]: time="2025-05-10 17:59:28.270096076Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ad002a03-884e-4939-bdda-2d30153ba73b name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:31 embed-certs-256321 crio[675]: time="2025-05-10 17:59:31.270183963Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=952d4ba1-fd41-40b5-be4e-5d8eb74653e0 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:31 embed-certs-256321 crio[675]: time="2025-05-10 17:59:31.270489071Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=952d4ba1-fd41-40b5-be4e-5d8eb74653e0 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:31 embed-certs-256321 crio[675]: time="2025-05-10 17:59:31.271044036Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0eccfc38-71d2-4304-b060-8bedaaef3997 name=/runtime.v1.ImageService/PullImage
	May 10 17:59:31 embed-certs-256321 crio[675]: time="2025-05-10 17:59:31.310451045Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 17:59:42 embed-certs-256321 crio[675]: time="2025-05-10 17:59:42.270205333Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=96a393d7-a852-456b-9d3f-25f4cedf4cac name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:42 embed-certs-256321 crio[675]: time="2025-05-10 17:59:42.270471764Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=96a393d7-a852-456b-9d3f-25f4cedf4cac name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:53 embed-certs-256321 crio[675]: time="2025-05-10 17:59:53.270310788Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=dc20a28a-ad07-4bd9-ab22-a09cd1a6f2b4 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:53 embed-certs-256321 crio[675]: time="2025-05-10 17:59:53.270536475Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=dc20a28a-ad07-4bd9-ab22-a09cd1a6f2b4 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:08 embed-certs-256321 crio[675]: time="2025-05-10 18:00:08.269532745Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c2c41f7c-ef4d-4d4a-8f2f-9ea64aebf125 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:08 embed-certs-256321 crio[675]: time="2025-05-10 18:00:08.269831580Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c2c41f7c-ef4d-4d4a-8f2f-9ea64aebf125 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:17 embed-certs-256321 crio[675]: time="2025-05-10 18:00:17.269502167Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5994d879-2c23-47e6-9a71-265c8a9c312d name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:17 embed-certs-256321 crio[675]: time="2025-05-10 18:00:17.269793066Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5994d879-2c23-47e6-9a71-265c8a9c312d name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:19 embed-certs-256321 crio[675]: time="2025-05-10 18:00:19.269853093Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=408ccf7c-ff95-42c2-bdf1-27f731df247f name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:19 embed-certs-256321 crio[675]: time="2025-05-10 18:00:19.270129809Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=408ccf7c-ff95-42c2-bdf1-27f731df247f name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:32 embed-certs-256321 crio[675]: time="2025-05-10 18:00:32.270081195Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0fdb3874-15d5-4eaa-995a-0c350d825574 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:32 embed-certs-256321 crio[675]: time="2025-05-10 18:00:32.270081336Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=03e69af0-5991-4334-9e35-5af4ccc8feb1 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:32 embed-certs-256321 crio[675]: time="2025-05-10 18:00:32.270385676Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0fdb3874-15d5-4eaa-995a-0c350d825574 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:32 embed-certs-256321 crio[675]: time="2025-05-10 18:00:32.270450677Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=03e69af0-5991-4334-9e35-5af4ccc8feb1 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:46 embed-certs-256321 crio[675]: time="2025-05-10 18:00:46.270089287Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=91c79159-363f-44ce-8a86-bea636aa179d name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:46 embed-certs-256321 crio[675]: time="2025-05-10 18:00:46.270407830Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=91c79159-363f-44ce-8a86-bea636aa179d name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:47 embed-certs-256321 crio[675]: time="2025-05-10 18:00:47.270006357Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0b51aad1-17dc-402e-b262-ccc52600411a name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:47 embed-certs-256321 crio[675]: time="2025-05-10 18:00:47.270245155Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0b51aad1-17dc-402e-b262-ccc52600411a name=/runtime.v1.ImageService/ImageStatus
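Note: the CRI-O log shows two distinct pull failures cycling on backoff. fake.domain/registry.k8s.io/echoserver:1.4 is a deliberately unresolvable image this test suite substitutes for metrics-server, so "not found" there is expected; the dashboard image fails for a different reason (a Docker Hub rate limit, per the kubelet log further down). Reproducing the dashboard pull by hand, assuming shell access to the node:

	sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93

An authenticated pull (crictl pull --creds user:pass ...) would sidestep the unauthenticated rate limit.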
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	97b473b20b0ec       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   91535c8944806       dashboard-metrics-scraper-86c6bf9756-8cgkk
	d9b57107e62b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   1ec93a8aee78a       storage-provisioner
	72e8906e39fad       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   9 minutes ago       Running             coredns                     1                   20e201db64160       coredns-674b8bbfcf-p95ml
	32ba395c226fe       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   9993c47e23f6c       busybox
	bff80d566cd79       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   9 minutes ago       Running             kube-proxy                  1                   57606f067b007       kube-proxy-4r9lw
	2e6f6081751ab       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f   9 minutes ago       Running             kindnet-cni                 1                   2e987c81482cc       kindnet-gz4vh
	65d5a65ecf063       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   1ec93a8aee78a       storage-provisioner
	b9151b983cbd7       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   9 minutes ago       Running             kube-controller-manager     1                   150a5fe20345e       kube-controller-manager-embed-certs-256321
	98130845020bf       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   9 minutes ago       Running             kube-apiserver              1                   c2c4389db1e8d       kube-apiserver-embed-certs-256321
	a5fd3191197b5       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   9 minutes ago       Running             etcd                        1                   b26c1247c448b       etcd-embed-certs-256321
	b210e16e87728       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   9 minutes ago       Running             kube-scheduler              1                   b6ecefecbcc99       kube-scheduler-embed-certs-256321
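Note: in the table above, dashboard-metrics-scraper has Exited after attempt 6 (CrashLoopBackOff, per the kubelet log below), and storage-provisioner is on its second instance: container 65d5a65ecf063 exited and d9b57107e62b1 replaced it. Pulling logs from the crashed scraper container, using the pod name from this run:

	kubectl --context embed-certs-256321 -n kubernetes-dashboard logs dashboard-metrics-scraper-86c6bf9756-8cgkk --previous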
	
	
	==> coredns [72e8906e39fadba197e2807b95680114dec737c392e60b99240271e920481151] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:34581 - 36001 "HINFO IN 1083514910540834653.3508312887700292770. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056057215s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
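Note: coredns came up before the service VIP was reachable (dial tcp 10.96.0.1:443: i/o timeout), retried, and eventually started against an unsynced API; the same timeout killed the first storage-provisioner below. 10.96.0.1 is the in-cluster kubernetes Service address, so these errors point at kube-proxy/CNI not yet having programmed service routing during the restart window, not at the API server itself. A spot check from inside any pod, assuming its image ships a wget with TLS support:

	kubectl exec -it <pod> -- wget -qO- --no-check-certificate https://10.96.0.1:443/version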
	
	
	==> describe nodes <==
	Name:               embed-certs-256321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-256321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=embed-certs-256321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_50_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:50:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-256321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:00:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:56:56 +0000   Sat, 10 May 2025 17:50:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:56:56 +0000   Sat, 10 May 2025 17:50:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:56:56 +0000   Sat, 10 May 2025 17:50:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:56:56 +0000   Sat, 10 May 2025 17:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-256321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bd94e85a446493a8ec17c6b0e53f440
	  System UUID:                f0aac67c-af15-467d-8e38-520b3e855bab
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-674b8bbfcf-p95ml                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-embed-certs-256321                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-gz4vh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-embed-certs-256321             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-embed-certs-256321    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4r9lw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-256321             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-f79f97bbb-cts6m                100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-8cgkk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-cmxkz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m35s                  kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x9 over 10m)      kubelet          Node embed-certs-256321 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node embed-certs-256321 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node embed-certs-256321 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node embed-certs-256321 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node embed-certs-256321 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node embed-certs-256321 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node embed-certs-256321 event: Registered Node embed-certs-256321 in Controller
	  Normal   NodeReady                10m                    kubelet          Node embed-certs-256321 status is now: NodeReady
	  Normal   Starting                 9m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m43s (x8 over 9m43s)  kubelet          Node embed-certs-256321 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m43s (x8 over 9m43s)  kubelet          Node embed-certs-256321 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s (x8 over 9m43s)  kubelet          Node embed-certs-256321 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m33s                  node-controller  Node embed-certs-256321 event: Registered Node embed-certs-256321 in Controller
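Note: the node itself is healthy: Ready since 17:50:41, no taints, and the allocated figures are small (950m CPU requested of 8000m allocatable, about 11%; 420Mi of roughly 31Gi memory). The two failing pods (metrics-server, kubernetes-dashboard) are image-pull problems, not scheduling or pressure problems. Note that

	kubectl --context embed-certs-256321 top node

is expected to error on this cluster, since it depends on the metrics-server that never started.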
	
	
	==> dmesg <==
	[  +1.019813] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000003] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000002] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +4.095573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000007] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000001] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000002] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +3.075626] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000002] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +1.019906] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000006] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
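Note: the martian-source messages are a side effect of running several minikube clusters on one host: packets for the service VIP 10.96.0.1 leak between Docker bridges (br-ba2829c5de69, br-ec25a068cacd, br-c98d5c048caa) and the kernel flags them as impossible sources. They are noisy but harmless for this test. If the noise matters, logging can be switched off, e.g.:

	sudo sysctl -w net.ipv4.conf.all.log_martians=0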
	
	
	==> etcd [a5fd3191197b5cbef87a2bcc3b8106b810ee03e659a75e84f00ef7ee10c9e4c4] <==
	{"level":"info","ts":"2025-05-10T17:51:15.680483Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T17:51:15.680519Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T17:51:15.680801Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-05-10T17:51:15.680857Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-05-10T17:51:15.681575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-05-10T17:51:15.681700Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-05-10T17:51:15.681840Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:15.681906Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:16.987883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.988055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.988148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.988224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.988287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.988328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.989748Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-256321 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:51:16.989964Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.990947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.991057Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.991104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.992011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.992596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.998365Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:51:16.996873Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-05-10T17:52:12.755966Z","caller":"traceutil/trace.go:171","msg":"trace[1146970826] transaction","detail":"{read_only:false; response_revision:722; number_of_response:1; }","duration":"121.70152ms","start":"2025-05-10T17:52:12.634238Z","end":"2025-05-10T17:52:12.755939Z","steps":["trace[1146970826] 'process raft request'  (duration: 60.043883ms)","trace[1146970826] 'compare'  (duration: 61.531583ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:00:57 up  3:43,  0 users,  load average: 0.99, 1.24, 3.41
	Linux embed-certs-256321 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2e6f6081751ab3dd4a46da09cb4f2d486b3687d166d051a39658de4b696f8fa9] <==
	I0510 17:58:52.364977       1 main.go:301] handling current node
	I0510 17:59:02.367559       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 17:59:02.367631       1 main.go:301] handling current node
	I0510 17:59:12.371490       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 17:59:12.371530       1 main.go:301] handling current node
	I0510 17:59:22.365213       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 17:59:22.365246       1 main.go:301] handling current node
	I0510 17:59:32.367511       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 17:59:32.367546       1 main.go:301] handling current node
	I0510 17:59:42.373182       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 17:59:42.373225       1 main.go:301] handling current node
	I0510 17:59:52.374252       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 17:59:52.374297       1 main.go:301] handling current node
	I0510 18:00:02.367553       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:00:02.367591       1 main.go:301] handling current node
	I0510 18:00:12.364582       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:00:12.364617       1 main.go:301] handling current node
	I0510 18:00:22.365481       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:00:22.365530       1 main.go:301] handling current node
	I0510 18:00:32.365193       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:00:32.365263       1 main.go:301] handling current node
	I0510 18:00:42.372562       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:00:42.372601       1 main.go:301] handling current node
	I0510 18:00:52.367489       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:00:52.367522       1 main.go:301] handling current node
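Note: kindnet is the healthy baseline here, reconciling its single node every ten seconds with no errors, so the CNI is not implicated in the failures above. Listing its pods, assuming the upstream app=kindnet label:

	kubectl --context embed-certs-256321 -n kube-system get pods -l app=kindnet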
	
	
	==> kube-apiserver [98130845020bfd267f9f378931eeb53eaa3893e68929464d0cb566065d00d6ad] <==
	E0510 17:56:20.788102       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0510 17:56:20.788129       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0510 17:56:20.789219       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:56:20.789234       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 17:57:20.792596       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 17:57:20.792596       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 17:57:20.792693       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 17:57:20.792705       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 17:57:20.794154       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:57:20.794173       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 17:59:20.794950       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 17:59:20.794956       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 17:59:20.795017       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 17:59:20.795085       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 17:59:20.796155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:59:20.796176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
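Note: the repeating block above is one symptom restated on a rate-limited requeue: the aggregated API v1beta1.metrics.k8s.io returns 503 because its backing Service has no ready endpoints (the metrics-server pod never pulled its image). The registration can be inspected directly:

	kubectl --context embed-certs-256321 get apiservice v1beta1.metrics.k8s.io

which should show Available=False with a MissingEndpoints reason while the pod is down.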
	
	
	==> kube-controller-manager [b9151b983cbd7c76d6ad0b5e6cfe26884bf60f230c413c6cdb1c2d656894acbe] <==
	I0510 17:54:55.578891       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:55:25.129278       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:55:25.585809       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:55:55.134459       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:55:55.593114       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:56:25.139869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:56:25.600336       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:56:55.144966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:56:55.607605       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:57:25.150964       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:57:25.614991       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:57:55.156173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:57:55.621300       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:58:25.161879       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:58:25.628460       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:58:55.167959       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:58:55.634833       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:59:25.173747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:59:25.641757       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:59:55.178758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:59:55.648252       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:00:25.184745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:00:25.655175       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:00:55.189836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:00:55.662310       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
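Note: the controller-manager errors are downstream of the same broken APIService: resource-quota and garbage-collector discovery both trip over the stale metrics.k8s.io/v1beta1 group every 30s, while continuing to work for every other group, so this is noise rather than breakage. If metrics-server were being abandoned rather than fixed, deleting the registration would silence it (an operational workaround, not something this test does):

	kubectl --context embed-certs-256321 delete apiservice v1beta1.metrics.k8s.io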
	
	
	==> kube-proxy [bff80d566cd7912b97b844e6de59e8652f0e6a7b718b5e30a5f2ba765dfdb71e] <==
	I0510 17:51:21.870802       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:51:22.185314       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0510 17:51:22.185392       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:51:22.455859       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:51:22.456001       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:51:22.546609       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:51:22.547081       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:51:22.547122       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:22.548802       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:51:22.548946       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:51:22.548882       1 config.go:199] "Starting service config controller"
	I0510 17:51:22.549067       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:51:22.548890       1 config.go:329] "Starting node config controller"
	I0510 17:51:22.549212       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:51:22.549035       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:51:22.549339       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:51:22.649113       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:51:22.649203       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:51:22.649314       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 17:51:22.650386       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
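Note: kube-proxy itself is clean; the only flag-worthy line is the startup warning that nodePortAddresses is unset, meaning NodePorts bind on every local IP. The log suggests the remedy itself: start kube-proxy with

	--nodeport-addresses primary

or set the equivalent nodePortAddresses field in the kube-proxy ConfigMap. Harmless in a single-node test cluster.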
	
	
	==> kube-scheduler [b210e16e877287bbae17030b542059e839ae27acda0111b26e777258af9f7e2f] <==
	I0510 17:51:17.893416       1 serving.go:386] Generated self-signed cert in-memory
	I0510 17:51:22.271568       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:51:22.271720       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:22.279578       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0510 17:51:22.279614       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:22.279625       1 shared_informer.go:350] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0510 17:51:22.279633       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:22.279658       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:51:22.279667       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:51:22.280043       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:51:22.280126       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:51:22.380034       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:51:22.380170       1 shared_informer.go:357] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0510 17:51:22.380898       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 18:00:08 embed-certs-256321 kubelet[813]: E0510 18:00:08.270096     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:00:14 embed-certs-256321 kubelet[813]: E0510 18:00:14.229206     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900014229037425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:14 embed-certs-256321 kubelet[813]: E0510 18:00:14.229240     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900014229037425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:16 embed-certs-256321 kubelet[813]: I0510 18:00:16.269814     813 scope.go:117] "RemoveContainer" containerID="97b473b20b0ec766a805edfb3ca4f2d375478b30208d33d9d1097e619e3948d2"
	May 10 18:00:16 embed-certs-256321 kubelet[813]: E0510 18:00:16.270087     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:00:17 embed-certs-256321 kubelet[813]: E0510 18:00:17.270127     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
	May 10 18:00:19 embed-certs-256321 kubelet[813]: E0510 18:00:19.270496     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:00:24 embed-certs-256321 kubelet[813]: E0510 18:00:24.230584     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900024230348172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:24 embed-certs-256321 kubelet[813]: E0510 18:00:24.230631     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900024230348172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:28 embed-certs-256321 kubelet[813]: I0510 18:00:28.269903     813 scope.go:117] "RemoveContainer" containerID="97b473b20b0ec766a805edfb3ca4f2d375478b30208d33d9d1097e619e3948d2"
	May 10 18:00:28 embed-certs-256321 kubelet[813]: E0510 18:00:28.270162     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:00:32 embed-certs-256321 kubelet[813]: E0510 18:00:32.270721     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
	May 10 18:00:32 embed-certs-256321 kubelet[813]: E0510 18:00:32.270782     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:00:34 embed-certs-256321 kubelet[813]: E0510 18:00:34.232315     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900034232084756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:34 embed-certs-256321 kubelet[813]: E0510 18:00:34.232359     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900034232084756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:43 embed-certs-256321 kubelet[813]: I0510 18:00:43.269937     813 scope.go:117] "RemoveContainer" containerID="97b473b20b0ec766a805edfb3ca4f2d375478b30208d33d9d1097e619e3948d2"
	May 10 18:00:43 embed-certs-256321 kubelet[813]: E0510 18:00:43.270177     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:00:44 embed-certs-256321 kubelet[813]: E0510 18:00:44.234233     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900044233985678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:44 embed-certs-256321 kubelet[813]: E0510 18:00:44.234278     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900044233985678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:46 embed-certs-256321 kubelet[813]: E0510 18:00:46.270737     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
	May 10 18:00:47 embed-certs-256321 kubelet[813]: E0510 18:00:47.270579     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:00:54 embed-certs-256321 kubelet[813]: E0510 18:00:54.235876     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900054235652900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:54 embed-certs-256321 kubelet[813]: E0510 18:00:54.235916     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900054235652900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:57 embed-certs-256321 kubelet[813]: I0510 18:00:57.269370     813 scope.go:117] "RemoveContainer" containerID="97b473b20b0ec766a805edfb3ca4f2d375478b30208d33d9d1097e619e3948d2"
	May 10 18:00:57 embed-certs-256321 kubelet[813]: E0510 18:00:57.269639     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	
	
	==> storage-provisioner [65d5a65ecf063411062857526ea3f59338a709368676895138b1e2978719d99f] <==
	I0510 17:51:21.157830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:51:51.160969       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d9b57107e62b11acfb2b0469a6d55921e056261237dd7c55ed30e6e552460968] <==
	W0510 18:00:32.894572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:34.898543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:34.902653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:36.905667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:36.909850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:38.913452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:38.918326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:40.921665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:40.925449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:42.928665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:42.933941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:44.937330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:44.941162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:46.943826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:46.947915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:48.950691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:48.955656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:50.958478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:50.962343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:52.965413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:52.969507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:54.972715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:54.977556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:56.981069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:56.985273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-256321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-256321 describe pod metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-256321 describe pod metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz: exit status 1 (61.079576ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-cts6m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-cmxkz" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-256321 describe pod metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.39s)
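The "toomanyrequests" errors in the kubelet log above point at the root cause of this failure: the shared CI host exhausted Docker Hub's unauthenticated pull quota. One way to confirm this is to read Docker Hub's documented RateLimit response headers; the sketch below is a diagnostic aid, and it assumes curl and jq are available on the runner:

	# Fetch an anonymous token for Docker's rate-limit test repository, then
	# read the ratelimit-limit / ratelimit-remaining headers via a HEAD request.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
	# e.g. "ratelimit-limit: 100;w=21600" means 100 anonymous pulls per 6 hours.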

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tq4tr" [62e44ee1-f320-4a22-bf54-04c5efdd417e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-05-10 18:00:59.899490439 +0000 UTC m=+3999.581277572
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 describe po kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-676255 describe po kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-tq4tr
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-676255/192.168.85.2
Start Time:       Sat, 10 May 2025 17:51:23 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n9npx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-n9npx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m36s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr to default-k8s-diff-port-676255
Warning  Failed     5m58s (x4 over 9m1s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m31s (x5 over 9m36s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m (x5 over 9m1s)      kubelet            Error: ErrImagePull
Warning  Failed     4m                     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m57s (x16 over 9m1s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    112s (x21 over 9m1s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 logs kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676255 logs kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard: exit status 1 (68.971838ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-7779f9b69b-tq4tr" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-676255 logs kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
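The same unauthenticated docker.io pull limit is the proximate cause here. A minimal mitigation sketch, assuming a machine that still has pull quota (or registry credentials) and reusing the profile and image names from this report; note the pod pins the image by digest, so whether a side-loaded image satisfies the digest-pinned reference depends on the container runtime:

	# Pull once where quota is available, then side-load the image into the
	# profile so the kubelet never has to contact docker.io for it.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p default-k8s-diff-port-676255 image load docker.io/kubernetesui/dashboard:v2.7.0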
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-676255
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-676255:

-- stdout --
	[
	    {
	        "Id": "55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42",
	        "Created": "2025-05-10T17:50:07.037978917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1049059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:51:06.553322475Z",
	            "FinishedAt": "2025-05-10T17:51:05.137009895Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/hostname",
	        "HostsPath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/hosts",
	        "LogPath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42-json.log",
	        "Name": "/default-k8s-diff-port-676255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-676255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-676255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42",
	                "LowerDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-676255",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-676255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-676255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-676255",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-676255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c3cc0b8dac914ea8c60a09224e24b56ee3cada2d7961ab187d7fd7457623144",
	            "SandboxKey": "/var/run/docker/netns/7c3cc0b8dac9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-676255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:a7:b0:8c:49:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c98d5c048caae1ea71a2ab5aaa214a59875742935cdb12b5c62117591aa8de39",
	                    "EndpointID": "fe27426d6520b980d7550b47009f5bfafaf84cd4511452c95376b69af7395b3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-676255",
	                        "55e52f167cfa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-676255 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-676255 logs -n 25: (1.156524667s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-697935             | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-697935                              | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:50 UTC |                     |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-058078                  | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-676255       | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256321                 | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | no-preload-058078 image list                           | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-173135             | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-173135                  | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-173135 image list                           | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:39.942859 1062960 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:39.943098 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943129 1062960 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:39.943146 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943562 1062960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:52:39.944604 1062960 out.go:352] Setting JSON to false
	I0510 17:52:39.945997 1062960 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12907,"bootTime":1746886653,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:39.946130 1062960 start.go:140] virtualization: kvm guest
	I0510 17:52:39.948309 1062960 out.go:177] * [newest-cni-173135] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:39.949674 1062960 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:39.949716 1062960 notify.go:220] Checking for updates...
	I0510 17:52:39.952354 1062960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:39.953722 1062960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:39.955058 1062960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:52:39.956484 1062960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:39.957799 1062960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:39.959587 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:39.960145 1062960 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:39.985577 1062960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:52:39.985704 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.035501 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.02617924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.035611 1062960 docker.go:318] overlay module found
	I0510 17:52:40.037784 1062960 out.go:177] * Using the docker driver based on existing profile
	I0510 17:52:40.039108 1062960 start.go:304] selected driver: docker
	I0510 17:52:40.039123 1062960 start.go:908] validating driver "docker" against &{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.039239 1062960 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:40.040135 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.092965 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.084143213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.093291 1062960 start_flags.go:994] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:40.093320 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:40.093383 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:40.093421 1062960 start.go:347] cluster config:
	{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.096146 1062960 out.go:177] * Starting "newest-cni-173135" primary control-plane node in "newest-cni-173135" cluster
	I0510 17:52:40.097483 1062960 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 17:52:40.098838 1062960 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 17:52:40.100016 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:40.100054 1062960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:40.100073 1062960 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:40.100128 1062960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 17:52:40.100157 1062960 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:40.100165 1062960 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:40.100261 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.120688 1062960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon, skipping pull
	I0510 17:52:40.120714 1062960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in daemon, skipping load
	I0510 17:52:40.120734 1062960 cache.go:230] Successfully downloaded all kic artifacts
	I0510 17:52:40.120784 1062960 start.go:360] acquireMachinesLock for newest-cni-173135: {Name:mk75975d6daf4063f8ba79544d03229010ceb1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:40.120860 1062960 start.go:364] duration metric: took 50.497µs to acquireMachinesLock for "newest-cni-173135"
	I0510 17:52:40.120885 1062960 start.go:96] Skipping create...Using existing machine configuration
	I0510 17:52:40.120892 1062960 fix.go:54] fixHost starting: 
	I0510 17:52:40.121107 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.139354 1062960 fix.go:112] recreateIfNeeded on newest-cni-173135: state=Stopped err=<nil>
	W0510 17:52:40.139386 1062960 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 17:52:40.141294 1062960 out.go:177] * Restarting existing docker container for "newest-cni-173135" ...
	W0510 17:52:39.629875 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	W0510 17:52:41.630228 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	I0510 17:52:43.131391 1044308 pod_ready.go:94] pod "etcd-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.131443 1044308 pod_ready.go:86] duration metric: took 50.006172737s for pod "etcd-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.134286 1044308 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.138012 1044308 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.138036 1044308 pod_ready.go:86] duration metric: took 3.724234ms for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.140268 1044308 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.143330 1044308 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.143350 1044308 pod_ready.go:86] duration metric: took 3.063093ms for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.145633 1044308 pod_ready.go:83] waiting for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.329167 1044308 pod_ready.go:94] pod "kube-proxy-8tdw4" is "Ready"
	I0510 17:52:43.329196 1044308 pod_ready.go:86] duration metric: took 183.5398ms for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.529673 1044308 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929860 1044308 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.929890 1044308 pod_ready.go:86] duration metric: took 400.187942ms for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929904 1044308 pod_ready.go:40] duration metric: took 1m22.819056587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:52:43.974390 1044308 start.go:607] kubectl: 1.33.0, cluster: 1.20.0 (minor skew: 13)
	I0510 17:52:43.975971 1044308 out.go:201] 
	W0510 17:52:43.977399 1044308 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.20.0.
	I0510 17:52:43.978880 1044308 out.go:177]   - Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
	I0510 17:52:43.980215 1044308 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-697935" cluster and "default" namespace by default
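
The warning above flags a client/cluster minor-version skew of 13 (kubectl 1.33 against Kubernetes 1.20); kubectl's documented support window is one minor version in either direction. A small illustrative computation (minorSkew is a hypothetical helper):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns |clientMinor - serverMinor| for version strings
// like "1.33.0" and "1.20.0".
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c < s {
		c, s = s, c
	}
	return c - s, nil
}

func main() {
	skew, _ := minorSkew("1.33.0", "1.20.0")
	fmt.Printf("minor skew: %d\n", skew) // 13, hence the warning above
}
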
	I0510 17:52:40.142629 1062960 cli_runner.go:164] Run: docker start newest-cni-173135
	I0510 17:52:40.387277 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.406155 1062960 kic.go:430] container "newest-cni-173135" state is running.
	I0510 17:52:40.406603 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:40.425434 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.425733 1062960 machine.go:93] provisionDockerMachine start ...
	I0510 17:52:40.425813 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:40.446701 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:40.446942 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:40.446954 1062960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 17:52:40.447629 1062960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38662->127.0.0.1:33504: read: connection reset by peer
	I0510 17:52:43.567334 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.567369 1062960 ubuntu.go:169] provisioning hostname "newest-cni-173135"
	I0510 17:52:43.567474 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.585810 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.586092 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.586114 1062960 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-173135 && echo "newest-cni-173135" | sudo tee /etc/hostname
	I0510 17:52:43.720075 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.720180 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.738458 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.738683 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.738700 1062960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-173135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-173135/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-173135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:43.860357 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:43.860392 1062960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 17:52:43.860425 1062960 ubuntu.go:177] setting up certificates
	I0510 17:52:43.860438 1062960 provision.go:84] configureAuth start
	I0510 17:52:43.860501 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:43.878837 1062960 provision.go:143] copyHostCerts
	I0510 17:52:43.878913 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem, removing ...
	I0510 17:52:43.878934 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem
	I0510 17:52:43.879010 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 17:52:43.879140 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem, removing ...
	I0510 17:52:43.879154 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem
	I0510 17:52:43.879187 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 17:52:43.879281 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem, removing ...
	I0510 17:52:43.879293 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem
	I0510 17:52:43.879328 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 17:52:43.879447 1062960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.newest-cni-173135 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-173135]
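
The line above generates a server certificate whose SANs cover the loopback address, the node IP, and the host names the machine may be reached by. A self-contained sketch with Go's crypto/x509, self-signed for brevity where minikube signs with its CA key (the names mirror the san=[...] list above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-173135"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-173135"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed here; minikube signs with its CA key and cert instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
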
	I0510 17:52:44.399990 1062960 provision.go:177] copyRemoteCerts
	I0510 17:52:44.400060 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:44.400097 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.417363 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.509498 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:44.533816 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 17:52:44.556664 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:52:44.579844 1062960 provision.go:87] duration metric: took 719.387116ms to configureAuth
	I0510 17:52:44.579874 1062960 ubuntu.go:193] setting minikube options for container-runtime
	I0510 17:52:44.580082 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:44.580225 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.597779 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.597997 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:44.598015 1062960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:44.861571 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:44.861603 1062960 machine.go:96] duration metric: took 4.435849898s to provisionDockerMachine
	I0510 17:52:44.861615 1062960 start.go:293] postStartSetup for "newest-cni-173135" (driver="docker")
	I0510 17:52:44.861633 1062960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:44.861696 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:44.861741 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.880393 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.968863 1062960 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:44.972444 1062960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 17:52:44.972471 1062960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 17:52:44.972479 1062960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 17:52:44.972486 1062960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 17:52:44.972499 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 17:52:44.972551 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 17:52:44.972632 1062960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem -> 7298152.pem in /etc/ssl/certs
	I0510 17:52:44.972715 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 17:52:44.981250 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:45.004513 1062960 start.go:296] duration metric: took 142.88043ms for postStartSetup
	I0510 17:52:45.004636 1062960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:52:45.004699 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.022563 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.108643 1062960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 17:52:45.113165 1062960 fix.go:56] duration metric: took 4.992266927s for fixHost
	I0510 17:52:45.113190 1062960 start.go:83] releasing machines lock for "newest-cni-173135", held for 4.992317581s
	I0510 17:52:45.113270 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:45.130656 1062960 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:45.130728 1062960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:45.130785 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.130732 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.149250 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.153557 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.235894 1062960 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:45.328928 1062960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:45.467882 1062960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 17:52:45.472485 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.480914 1062960 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 17:52:45.480989 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.489392 1062960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 17:52:45.489423 1062960 start.go:495] detecting cgroup driver to use...
	I0510 17:52:45.489464 1062960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0510 17:52:45.489535 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:45.501274 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:45.512452 1062960 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:45.512528 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:45.524828 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:45.535636 1062960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:45.618303 1062960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:45.695586 1062960 docker.go:241] disabling docker service ...
	I0510 17:52:45.695664 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:45.707968 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:45.719029 1062960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:45.800197 1062960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:45.887455 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:45.898860 1062960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:45.914760 1062960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:45.914818 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.924202 1062960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:45.924260 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.933839 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.944911 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.954202 1062960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:45.962950 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.972583 1062960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.981599 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
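
The sed one-liners above rewrite keys such as pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf. The same edit expressed in Go (setConfKey is a hypothetical helper; real code would also handle a missing key and preserve file ownership):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites `key = ...` lines in a crio drop-in the way the
// sed commands above do.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		panic(err)
	}
}
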
	I0510 17:52:45.991016 1062960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:45.999017 1062960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:46.007316 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.090516 1062960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 17:52:46.208208 1062960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:46.208290 1062960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:46.212169 1062960 start.go:563] Will wait 60s for crictl version
	I0510 17:52:46.212233 1062960 ssh_runner.go:195] Run: which crictl
	I0510 17:52:46.215714 1062960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:46.250179 1062960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 17:52:46.250256 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.286288 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.324763 1062960 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 17:52:46.326001 1062960 cli_runner.go:164] Run: docker network inspect newest-cni-173135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 17:52:46.342321 1062960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:46.346220 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.358987 1062960 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0510 17:52:46.360438 1062960 kubeadm.go:875] updating cluster {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:52:46.360585 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:46.360654 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.402300 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.402322 1062960 crio.go:433] Images already preloaded, skipping extraction
	I0510 17:52:46.402371 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.438279 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.438310 1062960 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:46.438321 1062960 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.33.0 crio true true} ...
	I0510 17:52:46.438480 1062960 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-173135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 17:52:46.438582 1062960 ssh_runner.go:195] Run: crio config
	I0510 17:52:46.483257 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:46.483281 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:46.483292 1062960 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0510 17:52:46.483315 1062960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-173135 NodeName:newest-cni-173135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:46.483479 1062960 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-173135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 17:52:46.483553 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:46.492414 1062960 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:46.492500 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:46.501119 1062960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0510 17:52:46.518140 1062960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:46.535112 1062960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0510 17:52:46.551871 1062960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:46.555171 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.565729 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.652845 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:46.666063 1062960 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135 for IP: 192.168.94.2
	I0510 17:52:46.666087 1062960 certs.go:194] generating shared ca certs ...
	I0510 17:52:46.666108 1062960 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:46.666267 1062960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 17:52:46.666346 1062960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 17:52:46.666367 1062960 certs.go:256] generating profile certs ...
	I0510 17:52:46.666488 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/client.key
	I0510 17:52:46.666575 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key.eac5560e
	I0510 17:52:46.666638 1062960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key
	I0510 17:52:46.666788 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem (1338 bytes)
	W0510 17:52:46.666836 1062960 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815_empty.pem, impossibly tiny 0 bytes
	I0510 17:52:46.666855 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:46.666891 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:46.666924 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:46.666954 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:46.667014 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:46.667736 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:46.694046 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:52:46.720567 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:46.750803 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 17:52:46.783126 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 17:52:46.861172 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 17:52:46.886437 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:46.909743 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:52:46.932746 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /usr/share/ca-certificates/7298152.pem (1708 bytes)
	I0510 17:52:46.955864 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:46.978875 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem --> /usr/share/ca-certificates/729815.pem (1338 bytes)
	I0510 17:52:47.001846 1062960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:47.018936 1062960 ssh_runner.go:195] Run: openssl version
	I0510 17:52:47.024207 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:47.033345 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036756 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036814 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.043306 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:47.051810 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/729815.pem && ln -fs /usr/share/ca-certificates/729815.pem /etc/ssl/certs/729815.pem"
	I0510 17:52:47.060972 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064315 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 17:06 /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064361 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.070986 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/729815.pem /etc/ssl/certs/51391683.0"
	I0510 17:52:47.079952 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7298152.pem && ln -fs /usr/share/ca-certificates/7298152.pem /etc/ssl/certs/7298152.pem"
	I0510 17:52:47.089676 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093441 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 17:06 /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093504 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.100198 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7298152.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 17:52:47.108827 1062960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:47.112497 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 17:52:47.119081 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 17:52:47.125525 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 17:52:47.131948 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 17:52:47.138247 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 17:52:47.145052 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
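
Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The equivalent check in Go (expiresWithin is a hypothetical helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window -- what `openssl x509 -checkend` checks.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
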
	I0510 17:52:47.152189 1062960 kubeadm.go:392] StartCluster: {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:47.152299 1062960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:47.152356 1062960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:47.190954 1062960 cri.go:89] found id: ""
	I0510 17:52:47.191057 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:47.200662 1062960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 17:52:47.200683 1062960 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 17:52:47.200729 1062960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 17:52:47.210371 1062960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 17:52:47.211583 1062960 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-173135" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.212205 1062960 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-722920/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-173135" cluster setting kubeconfig missing "newest-cni-173135" context setting]
	I0510 17:52:47.213167 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.215451 1062960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 17:52:47.225765 1062960 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0510 17:52:47.225809 1062960 kubeadm.go:593] duration metric: took 25.118512ms to restartPrimaryControlPlane
	I0510 17:52:47.225823 1062960 kubeadm.go:394] duration metric: took 73.645898ms to StartCluster
	I0510 17:52:47.225844 1062960 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.225925 1062960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.227600 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.227929 1062960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:47.228146 1062960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 17:52:47.228262 1062960 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-173135"
	I0510 17:52:47.228286 1062960 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-173135"
	W0510 17:52:47.228300 1062960 addons.go:247] addon storage-provisioner should already be in state true
	I0510 17:52:47.228322 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:47.228340 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228374 1062960 addons.go:69] Setting default-storageclass=true in profile "newest-cni-173135"
	I0510 17:52:47.228389 1062960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-173135"
	I0510 17:52:47.228696 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.228794 1062960 addons.go:69] Setting metrics-server=true in profile "newest-cni-173135"
	I0510 17:52:47.228819 1062960 addons.go:238] Setting addon metrics-server=true in "newest-cni-173135"
	W0510 17:52:47.228830 1062960 addons.go:247] addon metrics-server should already be in state true
	I0510 17:52:47.228871 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228905 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229098 1062960 addons.go:69] Setting dashboard=true in profile "newest-cni-173135"
	I0510 17:52:47.229122 1062960 addons.go:238] Setting addon dashboard=true in "newest-cni-173135"
	W0510 17:52:47.229131 1062960 addons.go:247] addon dashboard should already be in state true
	I0510 17:52:47.229160 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.229350 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229636 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.231952 1062960 out.go:177] * Verifying Kubernetes components...
	I0510 17:52:47.233708 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.257836 1062960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 17:52:47.259786 1062960 addons.go:238] Setting addon default-storageclass=true in "newest-cni-173135"
	W0510 17:52:47.259808 1062960 addons.go:247] addon default-storageclass should already be in state true
	I0510 17:52:47.259842 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.260502 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:52:47.260520 1062960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:52:47.260587 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.260894 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.269485 1062960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 17:52:47.270561 1062960 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 17:52:47.271826 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 17:52:47.271848 1062960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 17:52:47.271913 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.273848 1062960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:52:47.275490 1062960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.275521 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:52:47.275721 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.287652 1062960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.287676 1062960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:52:47.287737 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.300295 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.308088 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.314958 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.317183 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.570630 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.644300 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:47.648111 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 17:52:47.648144 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 17:52:47.745020 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 17:52:47.745054 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 17:52:47.746206 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.753235 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:52:47.753267 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 17:52:47.852275 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:52:47.852309 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:52:47.854261 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 17:52:47.854291 1062960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 17:52:47.957529 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 17:52:47.957561 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 17:52:47.962427 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:47.962453 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0510 17:52:47.967141 1062960 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0510 17:52:47.967185 1062960 retry.go:31] will retry after 329.411117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
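
	The retry.go line above reruns the failed kubectl apply after a short randomized delay, since the API server is still coming up. A generic sketch of that retry-with-jittered-backoff shape (retry is a hypothetical helper, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, roughly
// doubling delay between failures, in the spirit of the
// "will retry after 329.411117ms" line above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // stub: succeeds on the third call
		}
		return nil
	})
	fmt.Println("final:", err)
}
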
	I0510 17:52:47.967271 1062960 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:52:47.967381 1062960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:52:48.055318 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 17:52:48.055400 1062960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 17:52:48.060787 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:48.149914 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 17:52:48.149947 1062960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 17:52:48.175035 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 17:52:48.175070 1062960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 17:52:48.263718 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 17:52:48.263750 1062960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 17:52:48.282195 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:48.282227 1062960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 17:52:48.297636 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:48.359369 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:52.345196 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.598944537s)
	I0510 17:52:52.345534 1062960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.378119806s)
	I0510 17:52:52.345610 1062960 api_server.go:72] duration metric: took 5.117639828s to wait for apiserver process to appear ...
	I0510 17:52:52.345622 1062960 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:52:52.345683 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.350659 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 17:52:52.350693 1062960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 17:52:52.462305 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.401465129s)
	I0510 17:52:52.462425 1062960 addons.go:479] Verifying addon metrics-server=true in "newest-cni-173135"
	I0510 17:52:52.462366 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.164694895s)
	I0510 17:52:52.558877 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199364581s)
	I0510 17:52:52.560719 1062960 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-173135 addons enable metrics-server
	
	I0510 17:52:52.562364 1062960 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0510 17:52:52.563698 1062960 addons.go:514] duration metric: took 5.33556927s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0510 17:52:52.846151 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.850590 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0510 17:52:52.851935 1062960 api_server.go:141] control plane version: v1.33.0
	I0510 17:52:52.851968 1062960 api_server.go:131] duration metric: took 506.335848ms to wait for apiserver health ...
	I0510 17:52:52.851979 1062960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:52:52.855964 1062960 system_pods.go:59] 9 kube-system pods found
	I0510 17:52:52.856013 1062960 system_pods.go:61] "coredns-674b8bbfcf-l2m27" [11b63e72-35af-4a70-a7d3-b11e18104e2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856039 1062960 system_pods.go:61] "etcd-newest-cni-173135" [60c35044-778d-45d4-8d96-e58efbd9b54b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 17:52:52.856062 1062960 system_pods.go:61] "kindnet-5nzlt" [9158a53c-5cd1-426c-a255-37618e292899] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0510 17:52:52.856073 1062960 system_pods.go:61] "kube-apiserver-newest-cni-173135" [790eeefa-f593-4148-b5f3-43bf9807166f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 17:52:52.856085 1062960 system_pods.go:61] "kube-controller-manager-newest-cni-173135" [75bdb232-66d8-442a-8566-34a3d4674876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 17:52:52.856096 1062960 system_pods.go:61] "kube-proxy-v2tt7" [e502d755-4ecb-4567-9259-547f7c063830] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 17:52:52.856108 1062960 system_pods.go:61] "kube-scheduler-newest-cni-173135" [8bfc0953-197d-4185-b2e7-6e1a2d97a8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 17:52:52.856117 1062960 system_pods.go:61] "metrics-server-f79f97bbb-z4g7z" [a6bcfd5e-6f32-43ef-a6e7-336c90faf9ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856125 1062960 system_pods.go:61] "storage-provisioner" [effda141-cd8d-4f87-97a1-9166c59e1de0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856132 1062960 system_pods.go:74] duration metric: took 4.146105ms to wait for pod list to return data ...
	I0510 17:52:52.856143 1062960 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:52:52.858633 1062960 default_sa.go:45] found service account: "default"
	I0510 17:52:52.858658 1062960 default_sa.go:55] duration metric: took 2.507165ms for default service account to be created ...
	I0510 17:52:52.858670 1062960 kubeadm.go:578] duration metric: took 5.630701473s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:52.858701 1062960 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:52:52.861375 1062960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 17:52:52.861398 1062960 node_conditions.go:123] node cpu capacity is 8
	I0510 17:52:52.861411 1062960 node_conditions.go:105] duration metric: took 2.704535ms to run NodePressure ...
	I0510 17:52:52.861422 1062960 start.go:241] waiting for startup goroutines ...
	I0510 17:52:52.861431 1062960 start.go:246] waiting for cluster config update ...
	I0510 17:52:52.861444 1062960 start.go:255] writing updated cluster config ...
	I0510 17:52:52.861692 1062960 ssh_runner.go:195] Run: rm -f paused
	I0510 17:52:52.918445 1062960 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:52:52.920711 1062960 out.go:177] * Done! kubectl is now configured to use "newest-cni-173135" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 10 17:59:35 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:35.884518808Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=74c11d9d-250b-48b8-946d-5a6d03b34eba name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:38 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:38.884506869Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=16eb532b-0ae5-4a3b-8a2e-a11d38fa9f7a name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:38 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:38.884744149Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=16eb532b-0ae5-4a3b-8a2e-a11d38fa9f7a name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:46 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:46.883658300Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3420aab5-f38f-4b92-8fbf-3d806af3c6bf name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:46 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:46.884013288Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3420aab5-f38f-4b92-8fbf-3d806af3c6bf name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:46 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:46.884691850Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9ea36367-de32-40ca-8c5e-e07168e0c025 name=/runtime.v1.ImageService/PullImage
	May 10 17:59:46 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:46.885742455Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 17:59:51 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:51.884588173Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3b5c049b-c045-4aee-b0cc-f5b47325b0e0 name=/runtime.v1.ImageService/ImageStatus
	May 10 17:59:51 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 17:59:51.884898113Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3b5c049b-c045-4aee-b0cc-f5b47325b0e0 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:05 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:05.885150786Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4e7799e7-7164-436b-83e8-517da519ceb5 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:05 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:05.885373350Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4e7799e7-7164-436b-83e8-517da519ceb5 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:17 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:17.884055788Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=39648b80-efd3-4b14-a1f7-5a1b28012f18 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:17 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:17.884328889Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=39648b80-efd3-4b14-a1f7-5a1b28012f18 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:28 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:28.883848726Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8e3069ee-d103-4b98-a10a-446d0c62e694 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:28 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:28.884087326Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8e3069ee-d103-4b98-a10a-446d0c62e694 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:30 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:30.884272418Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=eaf9914a-949e-4f56-9d23-1187548fff20 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:30 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:30.884638334Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=eaf9914a-949e-4f56-9d23-1187548fff20 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:42 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:42.884674498Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=69f1391b-7a49-46ff-857d-c6af7ec8cabb name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:42 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:42.885025869Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=69f1391b-7a49-46ff-857d-c6af7ec8cabb name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:44 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:44.884238727Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1c1e92fd-e77d-4bd8-9798-9a3da925e911 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:44 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:44.884534697Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1c1e92fd-e77d-4bd8-9798-9a3da925e911 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:55 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:55.884149124Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3dafeed3-1c20-4d9c-ad90-295ea7a4785c name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:55 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:55.884494164Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3dafeed3-1c20-4d9c-ad90-295ea7a4785c name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:57 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:57.884298997Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=acb72442-2e4a-4d64-acbe-2f5d307e1b64 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:00:57 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:00:57.884551049Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=acb72442-2e4a-4d64-acbe-2f5d307e1b64 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e6c0360f4ce16       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   6c46559b9197b       dashboard-metrics-scraper-86c6bf9756-zj28d
	d78df9c428b8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   69fec956f4437       storage-provisioner
	c69d9fdd9ca0e       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   9 minutes ago       Running             coredns                     1                   a29be636294cc       coredns-674b8bbfcf-lv75k
	ca345a87b4c84       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f   9 minutes ago       Running             kindnet-cni                 1                   d3c4cfbf8e617       kindnet-g27zc
	84cfd522b5eb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   69fec956f4437       storage-provisioner
	85b8082e0d4eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   ea407ebdf019d       busybox
	12bbd396a9677       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   9 minutes ago       Running             kube-proxy                  1                   810de448710f2       kube-proxy-hfrsv
	c5614524272b1       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   9 minutes ago       Running             kube-scheduler              1                   d8a015ee04be7       kube-scheduler-default-k8s-diff-port-676255
	8b6a9b2c8306c       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   9 minutes ago       Running             kube-controller-manager     1                   d6a819fee72e3       kube-controller-manager-default-k8s-diff-port-676255
	4fc9c98394541       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   9 minutes ago       Running             kube-apiserver              1                   e43cd8380104e       kube-apiserver-default-k8s-diff-port-676255
	23663520f51e3       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   9 minutes ago       Running             etcd                        1                   f4a8c26370e39       etcd-default-k8s-diff-port-676255
	
	
	==> coredns [c69d9fdd9ca0e75c02f7f9679695858d9d3833cad45c36ca2b31e62a02e4d695] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:39968 - 58670 "HINFO IN 1433296294397684971.295867134072264484. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.052083969s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-676255
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-676255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=default-k8s-diff-port-676255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_50_24_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:50:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-676255
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:00:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:56:55 +0000   Sat, 10 May 2025 17:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:56:55 +0000   Sat, 10 May 2025 17:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:56:55 +0000   Sat, 10 May 2025 17:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:56:55 +0000   Sat, 10 May 2025 17:50:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-676255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0de183547024632ba286ea91266c983
	  System UUID:                4910076d-027f-4fc1-91a0-466c135c9938
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-674b8bbfcf-lv75k                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-676255                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-g27zc                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-676255             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-676255    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-hfrsv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-676255             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-f79f97bbb-xxd6x                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-zj28d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-tq4tr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m39s                  kube-proxy       
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-676255 event: Registered Node default-k8s-diff-port-676255 in Controller
	  Normal   NodeReady                10m                    kubelet          Node default-k8s-diff-port-676255 status is now: NodeReady
	  Normal   Starting                 9m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m48s (x8 over 9m48s)  kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m38s                  node-controller  Node default-k8s-diff-port-676255 event: Registered Node default-k8s-diff-port-676255 in Controller
	
	
	==> dmesg <==
	[  +1.019813] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000003] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000002] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +4.095573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000007] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000001] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000002] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +3.075626] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000002] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +1.019906] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000006] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	
	
	==> etcd [23663520f51e3c0d2c766772ab95952e2566e29f8c574114752f6ec472da9202] <==
	{"level":"info","ts":"2025-05-10T17:51:14.986038Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T17:51:14.988560Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T17:51:14.986129Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-05-10T17:51:14.988596Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-05-10T17:51:14.988611Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-05-10T17:51:14.988777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-05-10T17:51:14.988872Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-05-10T17:51:14.988998Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:14.989058Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:16.751992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.752165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.752246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.752305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.752385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.752441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.752480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.773304Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-676255 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:51:16.773452Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.773474Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.774750Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.773722Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.775543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.775756Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.776030Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:51:16.776726Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 18:01:01 up  3:43,  0 users,  load average: 0.91, 1.21, 3.40
	Linux default-k8s-diff-port-676255 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ca345a87b4c84411cddfb20cabc81f58266e9f50c474f8cbfe49db03041191b8] <==
	I0510 17:58:51.349052       1 main.go:301] handling current node
	I0510 17:59:01.348321       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 17:59:01.348353       1 main.go:301] handling current node
	I0510 17:59:11.351701       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 17:59:11.351743       1 main.go:301] handling current node
	I0510 17:59:21.348567       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 17:59:21.348612       1 main.go:301] handling current node
	I0510 17:59:31.348766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 17:59:31.348802       1 main.go:301] handling current node
	I0510 17:59:41.356069       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 17:59:41.356108       1 main.go:301] handling current node
	I0510 17:59:51.351527       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 17:59:51.351570       1 main.go:301] handling current node
	I0510 18:00:01.348737       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:00:01.348793       1 main.go:301] handling current node
	I0510 18:00:11.356336       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:00:11.356375       1 main.go:301] handling current node
	I0510 18:00:21.349118       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:00:21.349155       1 main.go:301] handling current node
	I0510 18:00:31.348644       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:00:31.348685       1 main.go:301] handling current node
	I0510 18:00:41.355531       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:00:41.355565       1 main.go:301] handling current node
	I0510 18:00:51.354542       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:00:51.354578       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fc9c9839454111ce0acea94e13826a65df3ab1fd0d76bc66399621e014e91bd] <==
	E0510 17:56:19.775986       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 17:56:19.776060       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 17:56:19.777099       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:56:19.777118       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 17:57:19.777529       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 17:57:19.777583       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 17:57:19.777587       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 17:57:19.777670       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 17:57:19.778724       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:57:19.778785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 17:59:19.779406       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 17:59:19.779470       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 17:59:19.779518       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 17:59:19.779542       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 17:59:19.781590       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:59:19.781616       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8b6a9b2c8306c1d2fb0fc2a82a75f8469160a819fbad7fb3d2e438380d74986a] <==
	I0510 17:54:53.713332       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:55:23.294991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:55:23.720140       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:55:53.301312       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:55:53.727190       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:56:23.307922       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:56:23.734095       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:56:53.313001       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:56:53.740613       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:57:23.318846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:57:23.747553       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:57:53.323915       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:57:53.754628       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:58:23.330113       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:58:23.761521       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:58:53.335548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:58:53.769282       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:59:23.341190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:59:23.776684       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 17:59:53.347692       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 17:59:53.783256       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:00:23.353744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:00:23.790158       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:00:53.358823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:00:53.797338       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [12bbd396a96776e2aa4a9fdafd487d693b6f6ae41b20eb30bb7af563e6f9da7c] <==
	I0510 17:51:20.673329       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:51:21.008748       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0510 17:51:21.008832       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:51:21.112762       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:51:21.112825       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:51:21.160961       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:51:21.180186       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:51:21.180237       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:21.181926       1 config.go:199] "Starting service config controller"
	I0510 17:51:21.182017       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:51:21.184449       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:51:21.187346       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:51:21.187405       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:51:21.186804       1 config.go:329] "Starting node config controller"
	I0510 17:51:21.188586       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:51:21.188656       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:51:21.188684       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:51:21.283006       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:51:21.289399       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:51:21.290199       1 shared_informer.go:357] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [c5614524272b1c26f34452e570f1198fae85debe604d4a3fa17071029baaa020] <==
	I0510 17:51:16.115763       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:51:18.759528       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:51:18.759653       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:51:18.759696       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:51:18.759735       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:51:19.115189       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:51:19.115237       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:19.167124       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:51:19.167523       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:51:19.167598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:19.189540       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:19.299579       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 18:00:17 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:17.489251     810 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	May 10 18:00:17 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:17.489332     810 kuberuntime_image.go:42] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	May 10 18:00:17 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:17.489557     810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9npx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-7779f9b69b-tq4tr_kubernetes-dashboard(62e44ee1-f320-4a22-bf54-04c5efdd417e): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	May 10 18:00:17 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:17.490747     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:00:17 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:17.884606     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:00:23 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:23.944142     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900023943902240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:23 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:23.944184     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900023943902240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:25 default-k8s-diff-port-676255 kubelet[810]: I0510 18:00:25.883472     810 scope.go:117] "RemoveContainer" containerID="e6c0360f4ce1675e4c4249d1da51f93fc22e2ac377cf87f469c3cec797babaa6"
	May 10 18:00:25 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:25.883759     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:00:28 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:28.884412     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:00:30 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:30.885027     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:00:33 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:33.945446     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900033945213903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:33 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:33.945490     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900033945213903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:39 default-k8s-diff-port-676255 kubelet[810]: I0510 18:00:39.883636     810 scope.go:117] "RemoveContainer" containerID="e6c0360f4ce1675e4c4249d1da51f93fc22e2ac377cf87f469c3cec797babaa6"
	May 10 18:00:39 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:39.883902     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:00:42 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:42.885380     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:00:43 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:43.946771     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900043946546996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:43 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:43.946812     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900043946546996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:44 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:44.884876     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:00:52 default-k8s-diff-port-676255 kubelet[810]: I0510 18:00:52.883435     810 scope.go:117] "RemoveContainer" containerID="e6c0360f4ce1675e4c4249d1da51f93fc22e2ac377cf87f469c3cec797babaa6"
	May 10 18:00:52 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:52.883689     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:00:53 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:53.948623     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900053948425174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:53 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:53.948665     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900053948425174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:00:55 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:55.884789     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:00:57 default-k8s-diff-port-676255 kubelet[810]: E0510 18:00:57.884913     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	
	
	==> storage-provisioner [84cfd522b5eb4f451a059a4b94f09aa492445664302f769cd7550687083f819e] <==
	I0510 17:51:20.560456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:51:50.563480       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d78df9c428b8fba424429e774d336b320abe9ad03549c02406d1429438773830] <==
	W0510 18:00:36.545153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:38.548493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:38.552157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:40.555933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:40.560961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:42.564748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:42.568836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:44.571687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:44.575761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:46.579214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:46.583680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:48.586892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:48.591153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:50.594167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:50.598005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:52.601463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:52.606801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:54.609760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:54.613763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:56.617066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:56.622342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:58.625730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:00:58.629643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:00.633226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:01:00.638259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
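
The failure mode in the log above is consistent: every pull of docker.io/kubernetesui/dashboard is rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit. One quick way to confirm the CI host has exhausted its anonymous quota is to read the rate-limit headers Docker Hub returns for its documented probe image; a minimal sketch, assuming curl and jq are available on the host:

	# Request an anonymous pull token, then read the rate-limit headers.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

A ratelimit-remaining value of 0 would match the toomanyrequests errors the kubelet logged above.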
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 describe pod metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676255 describe pod metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr: exit status 1 (57.051172ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-xxd6x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-tq4tr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-676255 describe pod metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.36s)
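
Note that the metrics-server ImagePullBackOff entries in this failure are expected noise rather than a regression: the addon is deliberately enabled with an unresolvable registry override, so pulls of fake.domain/registry.k8s.io/echoserver:1.4 can never succeed. The audit table later in this report records the exact invocation; its shape (with <profile> standing in for the profile name) is:

	out/minikube-linux-amd64 addons enable metrics-server -p <profile> \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

The only environmental failure in this run is therefore the docker.io rate limit that kept kubernetes-dashboard from starting.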

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6bj6d" [8dfa2561-0fd4-4df5-93e1-f807fe41266a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0510 17:52:44.667602  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:44.674050  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:44.685517  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:44.706947  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:44.748367  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:44.829832  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:44.991588  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:45.313516  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:45.955519  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:47.237406  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:47.933625  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:48.166280  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:49.799105  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-05-10 18:01:44.608207761 +0000 UTC m=+4044.289994895
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-697935 describe po kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-697935 describe po kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard:
Name:             kubernetes-dashboard-cd95d586-6bj6d
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-697935/192.168.103.2
Start Time:       Sat, 10 May 2025 17:51:32 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=cd95d586
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-cd95d586
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-lrxrc (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kubernetes-dashboard-token-lrxrc:
Type:        Secret (a volume populated by a Secret)
SecretName:  kubernetes-dashboard-token-lrxrc
Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-cd95d586-6bj6d to old-k8s-version-697935
Warning  Failed     8m50s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m (x4 over 10m)        kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m29s (x3 over 9m36s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m29s (x4 over 9m36s)   kubelet            Error: ErrImagePull
Normal   BackOff    6m5s (x7 over 9m35s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     5m11s (x11 over 9m35s)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-697935 logs kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-697935 logs kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard: exit status 1 (79.487106ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-cd95d586-6bj6d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-697935 logs kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
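
A plausible mitigation for this class of failure (a sketch, not something the test run attempted) is to pre-seed the dashboard image into the cluster so the kubelet never pulls from docker.io; minikube's image load subcommand exists for exactly this. Whether a tag-loaded copy satisfies the digest-pinned reference in the deployment depends on the runtime's image store, so treat that as an assumption to verify:

	# Pull once on the host (authenticated, if credentials are configured),
	# then load the image into the profile's node.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p old-k8s-version-697935 image load docker.io/kubernetesui/dashboard:v2.7.0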
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-697935
helpers_test.go:235: (dbg) docker inspect old-k8s-version-697935:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1",
	        "Created": "2025-05-10T17:48:25.557404666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1044519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:50:53.432208071Z",
	            "FinishedAt": "2025-05-10T17:50:52.531319087Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/hosts",
	        "LogPath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1-json.log",
	        "Name": "/old-k8s-version-697935",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-697935:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-697935",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1",
	                "LowerDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-697935",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-697935/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-697935",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-697935",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-697935",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3258bf027cb8b69c815869d87c662acfa78f86254269772044555e9f22043439",
	            "SandboxKey": "/var/run/docker/netns/3258bf027cb8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-697935": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:83:4d:ad:3a:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec25a068cacdea5d21bc1a6d5632ec61740de3d163f84e29a86d0b23f4aa28df",
	                    "EndpointID": "67aa9a0dffc46ed701fed3e6000c482d56647c162de59793697cff54360d1d2d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-697935",
	                        "eb68cd4666de"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
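
When only a few fields from the inspect blob above matter, docker inspect accepts a Go template via -f, which keeps post-mortems readable; a sketch using fields present in the output above:

	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-697935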
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697935 -n old-k8s-version-697935
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-697935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-697935 logs -n 25: (1.29520656s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-697935             | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-697935                              | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:50 UTC |                     |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-058078                  | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-676255       | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256321                 | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | no-preload-058078 image list                           | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-173135             | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-173135                  | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-173135 image list                           | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:39.942859 1062960 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:39.943098 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943129 1062960 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:39.943146 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943562 1062960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:52:39.944604 1062960 out.go:352] Setting JSON to false
	I0510 17:52:39.945997 1062960 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12907,"bootTime":1746886653,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:39.946130 1062960 start.go:140] virtualization: kvm guest
	I0510 17:52:39.948309 1062960 out.go:177] * [newest-cni-173135] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:39.949674 1062960 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:39.949716 1062960 notify.go:220] Checking for updates...
	I0510 17:52:39.952354 1062960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:39.953722 1062960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:39.955058 1062960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:52:39.956484 1062960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:39.957799 1062960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:39.959587 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:39.960145 1062960 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:39.985577 1062960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:52:39.985704 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.035501 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.02617924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.035611 1062960 docker.go:318] overlay module found
	I0510 17:52:40.037784 1062960 out.go:177] * Using the docker driver based on existing profile
	I0510 17:52:40.039108 1062960 start.go:304] selected driver: docker
	I0510 17:52:40.039123 1062960 start.go:908] validating driver "docker" against &{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.039239 1062960 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:40.040135 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.092965 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.084143213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.093291 1062960 start_flags.go:994] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:40.093320 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:40.093383 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:40.093421 1062960 start.go:347] cluster config:
	{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.096146 1062960 out.go:177] * Starting "newest-cni-173135" primary control-plane node in "newest-cni-173135" cluster
	I0510 17:52:40.097483 1062960 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 17:52:40.098838 1062960 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 17:52:40.100016 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:40.100054 1062960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:40.100073 1062960 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:40.100128 1062960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 17:52:40.100157 1062960 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:40.100165 1062960 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:40.100261 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.120688 1062960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon, skipping pull
	I0510 17:52:40.120714 1062960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in daemon, skipping load
	I0510 17:52:40.120734 1062960 cache.go:230] Successfully downloaded all kic artifacts
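
	Note: the restart skips every download because both artifacts are already cached: the preload tarball exists on disk (preload.go:172) and the kicbase image is already in the local docker daemon (image.go:100). A minimal Go sketch of the same existence check, with a hypothetical path layout rather than minikube's actual code:

	package main

	import (
		"fmt"
		"os"
	)

	// hasLocalPreload reports whether the preloaded-images tarball is already
	// cached on disk, in which case the download step can be skipped entirely.
	// The filename shape mirrors the one in the log; the helper is illustrative.
	func hasLocalPreload(cacheDir, k8sVersion, runtime string) bool {
		tarball := fmt.Sprintf("%s/preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4",
			cacheDir, k8sVersion, runtime)
		_, err := os.Stat(tarball)
		return err == nil
	}

	func main() {
		if hasLocalPreload("/home/jenkins/.minikube/cache/preloaded-tarball", "v1.33.0", "cri-o") {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("no local preload, would download tarball")
		}
	}
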
	I0510 17:52:40.120784 1062960 start.go:360] acquireMachinesLock for newest-cni-173135: {Name:mk75975d6daf4063f8ba79544d03229010ceb1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:40.120860 1062960 start.go:364] duration metric: took 50.497µs to acquireMachinesLock for "newest-cni-173135"
	I0510 17:52:40.120885 1062960 start.go:96] Skipping create...Using existing machine configuration
	I0510 17:52:40.120892 1062960 fix.go:54] fixHost starting: 
	I0510 17:52:40.121107 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.139354 1062960 fix.go:112] recreateIfNeeded on newest-cni-173135: state=Stopped err=<nil>
	W0510 17:52:40.139386 1062960 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 17:52:40.141294 1062960 out.go:177] * Restarting existing docker container for "newest-cni-173135" ...
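
	Note: fix.go found the container in state=Stopped and opted to restart it rather than recreate it. An illustrative sketch of that decision, shelling out to the same `docker container inspect --format={{.State.Status}}` command the log runs (the helper names are invented, and docker itself reports a stopped container as "exited"):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs the inspect command from the log and returns the
	// container's raw state string ("running", "exited", ...).
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format={{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "newest-cni-173135"
		switch state, _ := containerState(name); state {
		case "running":
			fmt.Println("machine already running, nothing to fix")
		default:
			fmt.Printf("unexpected machine state %q, restarting\n", state)
			_ = exec.Command("docker", "start", name).Run()
		}
	}
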
	W0510 17:52:39.629875 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	W0510 17:52:41.630228 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	I0510 17:52:43.131391 1044308 pod_ready.go:94] pod "etcd-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.131443 1044308 pod_ready.go:86] duration metric: took 50.006172737s for pod "etcd-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.134286 1044308 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.138012 1044308 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.138036 1044308 pod_ready.go:86] duration metric: took 3.724234ms for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.140268 1044308 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.143330 1044308 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.143350 1044308 pod_ready.go:86] duration metric: took 3.063093ms for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.145633 1044308 pod_ready.go:83] waiting for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.329167 1044308 pod_ready.go:94] pod "kube-proxy-8tdw4" is "Ready"
	I0510 17:52:43.329196 1044308 pod_ready.go:86] duration metric: took 183.5398ms for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.529673 1044308 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929860 1044308 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.929890 1044308 pod_ready.go:86] duration metric: took 400.187942ms for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929904 1044308 pod_ready.go:40] duration metric: took 1m22.819056587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:52:43.974390 1044308 start.go:607] kubectl: 1.33.0, cluster: 1.20.0 (minor skew: 13)
	I0510 17:52:43.975971 1044308 out.go:201] 
	W0510 17:52:43.977399 1044308 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.20.0.
	I0510 17:52:43.978880 1044308 out.go:177]   - Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
	I0510 17:52:43.980215 1044308 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-697935" cluster and "default" namespace by default
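
	Note: start.go:607 compares the host kubectl (1.33.0) against the cluster (1.20.0) and reports a minor skew of 13; kubectl only guarantees compatibility within one minor version of the server, hence the warning above. A toy version of that check:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components of
	// two "major.minor.patch" version strings (error handling elided for brevity).
	func minorSkew(client, cluster string) int {
		minor := func(v string) int {
			n, _ := strconv.Atoi(strings.Split(v, ".")[1])
			return n
		}
		d := minor(client) - minor(cluster)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		if skew := minorSkew("1.33.0", "1.20.0"); skew > 1 {
			fmt.Printf("! kubectl minor skew is %d; expect incompatibilities\n", skew)
		}
	}
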
	I0510 17:52:40.142629 1062960 cli_runner.go:164] Run: docker start newest-cni-173135
	I0510 17:52:40.387277 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.406155 1062960 kic.go:430] container "newest-cni-173135" state is running.
	I0510 17:52:40.406603 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:40.425434 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.425733 1062960 machine.go:93] provisionDockerMachine start ...
	I0510 17:52:40.425813 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:40.446701 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:40.446942 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:40.446954 1062960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 17:52:40.447629 1062960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38662->127.0.0.1:33504: read: connection reset by peer
	I0510 17:52:43.567334 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.567369 1062960 ubuntu.go:169] provisioning hostname "newest-cni-173135"
	I0510 17:52:43.567474 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.585810 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.586092 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.586114 1062960 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-173135 && echo "newest-cni-173135" | sudo tee /etc/hostname
	I0510 17:52:43.720075 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.720180 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.738458 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.738683 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.738700 1062960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-173135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-173135/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-173135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:43.860357 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:43.860392 1062960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 17:52:43.860425 1062960 ubuntu.go:177] setting up certificates
	I0510 17:52:43.860438 1062960 provision.go:84] configureAuth start
	I0510 17:52:43.860501 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:43.878837 1062960 provision.go:143] copyHostCerts
	I0510 17:52:43.878913 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem, removing ...
	I0510 17:52:43.878934 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem
	I0510 17:52:43.879010 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 17:52:43.879140 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem, removing ...
	I0510 17:52:43.879154 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem
	I0510 17:52:43.879187 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 17:52:43.879281 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem, removing ...
	I0510 17:52:43.879293 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem
	I0510 17:52:43.879328 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 17:52:43.879447 1062960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.newest-cni-173135 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-173135]
	I0510 17:52:44.399990 1062960 provision.go:177] copyRemoteCerts
	I0510 17:52:44.400060 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:44.400097 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.417363 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.509498 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:44.533816 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 17:52:44.556664 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:52:44.579844 1062960 provision.go:87] duration metric: took 719.387116ms to configureAuth
	I0510 17:52:44.579874 1062960 ubuntu.go:193] setting minikube options for container-runtime
	I0510 17:52:44.580082 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:44.580225 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.597779 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.597997 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:44.598015 1062960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:44.861571 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:44.861603 1062960 machine.go:96] duration metric: took 4.435849898s to provisionDockerMachine
	I0510 17:52:44.861615 1062960 start.go:293] postStartSetup for "newest-cni-173135" (driver="docker")
	I0510 17:52:44.861633 1062960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:44.861696 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:44.861741 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.880393 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.968863 1062960 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:44.972444 1062960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 17:52:44.972471 1062960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 17:52:44.972479 1062960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 17:52:44.972486 1062960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 17:52:44.972499 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 17:52:44.972551 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 17:52:44.972632 1062960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem -> 7298152.pem in /etc/ssl/certs
	I0510 17:52:44.972715 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 17:52:44.981250 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:45.004513 1062960 start.go:296] duration metric: took 142.88043ms for postStartSetup
	I0510 17:52:45.004636 1062960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:52:45.004699 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.022563 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.108643 1062960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 17:52:45.113165 1062960 fix.go:56] duration metric: took 4.992266927s for fixHost
	I0510 17:52:45.113190 1062960 start.go:83] releasing machines lock for "newest-cni-173135", held for 4.992317581s
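
	Note: the machines lock acquired at start.go:360 (Delay:500ms Timeout:10m0s) is released here after roughly 5s of provisioning work. A stand-in sketch of such a named try-lock loop, using an in-process map rather than whatever lock library minikube actually uses:

	package main

	import (
		"errors"
		"fmt"
		"sync"
		"time"
	)

	var machines sync.Map // machine name -> struct{}{} while held

	// acquire polls for a named lock every delay until timeout, the same
	// Delay/Timeout shape the Spec in the log describes.
	func acquire(name string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, loaded := machines.LoadOrStore(name, struct{}{}); !loaded {
				return nil // we stored the entry, so we hold the lock
			}
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring " + name)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		if err := acquire("newest-cni-173135", 500*time.Millisecond, 10*time.Minute); err == nil {
			defer machines.Delete("newest-cni-173135")
			fmt.Println("lock held; safe to mutate the machine")
		}
	}
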
	I0510 17:52:45.113270 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:45.130656 1062960 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:45.130728 1062960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:45.130785 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.130732 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.149250 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.153557 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.235894 1062960 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:45.328928 1062960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:45.467882 1062960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 17:52:45.472485 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.480914 1062960 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 17:52:45.480989 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.489392 1062960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 17:52:45.489423 1062960 start.go:495] detecting cgroup driver to use...
	I0510 17:52:45.489464 1062960 detect.go:187] detected "cgroupfs" cgroup driver on host os
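
	Note: detect.go settled on "cgroupfs" for this Ubuntu 20.04 host, matching the CgroupDriver reported by docker info earlier. One common detection heuristic (not necessarily minikube's) checks whether a unified cgroup v2 hierarchy is mounted:

	package main

	import (
		"fmt"
		"os"
	)

	// detectCgroupDriver is a heuristic sketch: a cgroup v2 host exposes
	// cgroup.controllers at the hierarchy root and is typically paired with the
	// "systemd" driver; anything else falls back to "cgroupfs", which is what
	// this run detected.
	func detectCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() {
		fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
	}
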
	I0510 17:52:45.489535 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:45.501274 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:45.512452 1062960 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:45.512528 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:45.524828 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:45.535636 1062960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:45.618303 1062960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:45.695586 1062960 docker.go:241] disabling docker service ...
	I0510 17:52:45.695664 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:45.707968 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:45.719029 1062960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:45.800197 1062960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:45.887455 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:45.898860 1062960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:45.914760 1062960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:45.914818 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.924202 1062960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:45.924260 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.933839 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.944911 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.954202 1062960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:45.962950 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.972583 1062960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.981599 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.991016 1062960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:45.999017 1062960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:46.007316 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.090516 1062960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 17:52:46.208208 1062960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:46.208290 1062960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:46.212169 1062960 start.go:563] Will wait 60s for crictl version
	I0510 17:52:46.212233 1062960 ssh_runner.go:195] Run: which crictl
	I0510 17:52:46.215714 1062960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:46.250179 1062960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 17:52:46.250256 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.286288 1062960 ssh_runner.go:195] Run: crio --version
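
	Note: before any crictl call, start.go waits up to 60s for /var/run/crio/crio.sock to reappear after the crio restart. A sketch of that polling step (the 250ms interval is an arbitrary choice, not minikube's):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls stat(2) on the CRI socket until it exists or the
	// budget from the log ("Will wait 60s for socket path") runs out.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return errors.New("socket never appeared: " + path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio.sock is up; safe to run crictl version")
	}
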
	I0510 17:52:46.324763 1062960 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 17:52:46.326001 1062960 cli_runner.go:164] Run: docker network inspect newest-cni-173135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 17:52:46.342321 1062960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:46.346220 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.358987 1062960 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0510 17:52:46.360438 1062960 kubeadm.go:875] updating cluster {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:52:46.360585 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:46.360654 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.402300 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.402322 1062960 crio.go:433] Images already preloaded, skipping extraction
	I0510 17:52:46.402371 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.438279 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.438310 1062960 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:46.438321 1062960 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.33.0 crio true true} ...
	I0510 17:52:46.438480 1062960 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-173135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 17:52:46.438582 1062960 ssh_runner.go:195] Run: crio config
	I0510 17:52:46.483257 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:46.483281 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:46.483292 1062960 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0510 17:52:46.483315 1062960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-173135 NodeName:newest-cni-173135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:46.483479 1062960 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-173135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 17:52:46.483553 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:46.492414 1062960 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:46.492500 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:46.501119 1062960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0510 17:52:46.518140 1062960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:46.535112 1062960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0510 17:52:46.551871 1062960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:46.555171 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
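
	Note: both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same grep -v / append / sudo cp pipeline: drop any existing line for the name, append a fresh mapping, copy the result back as root. The string-level equivalent, as an illustrative Go helper:

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry mirrors the shell pipeline in the log: drop any line
	// already ending in "\t<name>", append the fresh "ip\tname" mapping, and
	// return the new file body (the log then copies it over /etc/hosts as root).
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		// Example input with a hypothetical stale entry to replace.
		hosts := "127.0.0.1\tlocalhost\n192.168.94.9\tcontrol-plane.minikube.internal\n"
		fmt.Print(upsertHostsEntry(hosts, "192.168.94.2", "control-plane.minikube.internal"))
	}
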
	I0510 17:52:46.565729 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.652845 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:46.666063 1062960 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135 for IP: 192.168.94.2
	I0510 17:52:46.666087 1062960 certs.go:194] generating shared ca certs ...
	I0510 17:52:46.666108 1062960 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:46.666267 1062960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 17:52:46.666346 1062960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 17:52:46.666367 1062960 certs.go:256] generating profile certs ...
	I0510 17:52:46.666488 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/client.key
	I0510 17:52:46.666575 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key.eac5560e
	I0510 17:52:46.666638 1062960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key
	I0510 17:52:46.666788 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem (1338 bytes)
	W0510 17:52:46.666836 1062960 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815_empty.pem, impossibly tiny 0 bytes
	I0510 17:52:46.666855 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:46.666891 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:46.666924 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:46.666954 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:46.667014 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:46.667736 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:46.694046 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:52:46.720567 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:46.750803 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 17:52:46.783126 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 17:52:46.861172 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 17:52:46.886437 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:46.909743 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:52:46.932746 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /usr/share/ca-certificates/7298152.pem (1708 bytes)
	I0510 17:52:46.955864 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:46.978875 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem --> /usr/share/ca-certificates/729815.pem (1338 bytes)
	I0510 17:52:47.001846 1062960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:47.018936 1062960 ssh_runner.go:195] Run: openssl version
	I0510 17:52:47.024207 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:47.033345 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036756 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036814 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.043306 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:47.051810 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/729815.pem && ln -fs /usr/share/ca-certificates/729815.pem /etc/ssl/certs/729815.pem"
	I0510 17:52:47.060972 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064315 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 17:06 /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064361 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.070986 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/729815.pem /etc/ssl/certs/51391683.0"
	I0510 17:52:47.079952 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7298152.pem && ln -fs /usr/share/ca-certificates/7298152.pem /etc/ssl/certs/7298152.pem"
	I0510 17:52:47.089676 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093441 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 17:06 /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093504 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.100198 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7298152.pem /etc/ssl/certs/3ec20f2e.0"
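
	Note: each `openssl x509 -hash` / `ln -fs` pair above wires a CA into OpenSSL's hashed-directory lookup, where a certificate is found via a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA in this run). A sketch reproducing the pairing with the same two commands; paths are examples, and writing /etc/ssl/certs needs root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert computes the certificate's subject hash with the same openssl
	// invocation the log runs, then symlinks /etc/ssl/certs/<hash>.0 at it so
	// OpenSSL's hashed-directory lookup can find the CA.
	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // emulate the -f in ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("linkCert:", err)
		}
	}
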
	I0510 17:52:47.108827 1062960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:47.112497 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 17:52:47.119081 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 17:52:47.125525 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 17:52:47.131948 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 17:52:47.138247 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 17:52:47.145052 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0510 17:52:47.152189 1062960 kubeadm.go:392] StartCluster: {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:47.152299 1062960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:47.152356 1062960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:47.190954 1062960 cri.go:89] found id: ""
	I0510 17:52:47.191057 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:47.200662 1062960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 17:52:47.200683 1062960 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 17:52:47.200729 1062960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 17:52:47.210371 1062960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 17:52:47.211583 1062960 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-173135" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.212205 1062960 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-722920/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-173135" cluster setting kubeconfig missing "newest-cni-173135" context setting]
	I0510 17:52:47.213167 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.215451 1062960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 17:52:47.225765 1062960 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0510 17:52:47.225809 1062960 kubeadm.go:593] duration metric: took 25.118512ms to restartPrimaryControlPlane
	I0510 17:52:47.225823 1062960 kubeadm.go:394] duration metric: took 73.645898ms to StartCluster
	I0510 17:52:47.225844 1062960 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.225925 1062960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.227600 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.227929 1062960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:47.228146 1062960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 17:52:47.228262 1062960 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-173135"
	I0510 17:52:47.228286 1062960 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-173135"
	W0510 17:52:47.228300 1062960 addons.go:247] addon storage-provisioner should already be in state true
	I0510 17:52:47.228322 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:47.228340 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228374 1062960 addons.go:69] Setting default-storageclass=true in profile "newest-cni-173135"
	I0510 17:52:47.228389 1062960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-173135"
	I0510 17:52:47.228696 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.228794 1062960 addons.go:69] Setting metrics-server=true in profile "newest-cni-173135"
	I0510 17:52:47.228819 1062960 addons.go:238] Setting addon metrics-server=true in "newest-cni-173135"
	W0510 17:52:47.228830 1062960 addons.go:247] addon metrics-server should already be in state true
	I0510 17:52:47.228871 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228905 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229098 1062960 addons.go:69] Setting dashboard=true in profile "newest-cni-173135"
	I0510 17:52:47.229122 1062960 addons.go:238] Setting addon dashboard=true in "newest-cni-173135"
	W0510 17:52:47.229131 1062960 addons.go:247] addon dashboard should already be in state true
	I0510 17:52:47.229160 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.229350 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229636 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
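
	Note: the addons.go lines above walk the toEnable map from addons.go:511; every addon marked true in the restored profile is re-enabled, with a warning that it "should already be in state true" since this is a restart rather than a fresh start. The shape of that loop, purely illustratively:

	package main

	import "fmt"

	// enableAddons flips on every addon marked true in a toEnable map like the
	// one in the log; the printed format mimics the log lines, nothing more.
	func enableAddons(profile string, toEnable map[string]bool) {
		for name, on := range toEnable {
			if !on {
				continue
			}
			fmt.Printf("Setting addon %s=true in %q\n", name, profile)
		}
	}

	func main() {
		enableAddons("newest-cni-173135", map[string]bool{
			"dashboard": true, "default-storageclass": true,
			"metrics-server": true, "storage-provisioner": true,
			"ingress": false,
		})
	}
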
	I0510 17:52:47.231952 1062960 out.go:177] * Verifying Kubernetes components...
	I0510 17:52:47.233708 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.257836 1062960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 17:52:47.259786 1062960 addons.go:238] Setting addon default-storageclass=true in "newest-cni-173135"
	W0510 17:52:47.259808 1062960 addons.go:247] addon default-storageclass should already be in state true
	I0510 17:52:47.259842 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.260502 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:52:47.260520 1062960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:52:47.260587 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.260894 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.269485 1062960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 17:52:47.270561 1062960 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 17:52:47.271826 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 17:52:47.271848 1062960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 17:52:47.271913 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.273848 1062960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:52:47.275490 1062960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.275521 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:52:47.275721 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.287652 1062960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.287676 1062960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:52:47.287737 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
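
For reference, the Go template handed to `docker container inspect -f` in the cli_runner lines above extracts the published host port for "22/tcp" from the container's network settings. A minimal, self-contained sketch of the same template evaluation; the struct below is a hypothetical stand-in for Docker's real inspect payload:

package main

import (
	"os"
	"text/template"
)

// portBinding mirrors the shape of Docker's NetworkSettings.Ports entries
// just closely enough for the template to evaluate.
type portBinding struct {
	HostIP   string
	HostPort string
}

func main() {
	// Stand-in data: the port mapping the log resolves to 33504.
	var data struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}
	data.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "33504"}},
	}

	// The exact template string from the inspect invocations above.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, data) // prints: 33504
}
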
	I0510 17:52:47.300295 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.308088 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.314958 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.317183 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
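
The four sshutil lines above open SSH sessions to the node container through the published port 33504. A minimal sketch of the same connection using golang.org/x/crypto/ssh, assuming the key path, user, and port shown in the log; host-key checking is skipped, as is typical for a throwaway test node:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the sshutil log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: no host-key pinning
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33504", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One command per session, mirroring the ssh_runner Run calls below.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl daemon-reload")
	fmt.Printf("%s err=%v\n", out, err)
}
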
	I0510 17:52:47.570630 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.644300 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:47.648111 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 17:52:47.648144 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 17:52:47.745020 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 17:52:47.745054 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 17:52:47.746206 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.753235 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:52:47.753267 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 17:52:47.852275 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:52:47.852309 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:52:47.854261 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 17:52:47.854291 1062960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 17:52:47.957529 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 17:52:47.957561 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 17:52:47.962427 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:47.962453 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0510 17:52:47.967141 1062960 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0510 17:52:47.967185 1062960 retry.go:31] will retry after 329.411117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
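
The validation failure above is expected while the apiserver is still coming up, so the apply is rerun after a delay (retry.go:31). A minimal sketch of that retry-with-backoff pattern; the function and the doubling backoff here are illustrative, not minikube's exact implementation:

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff reruns fn until it succeeds or attempts are exhausted,
// roughly doubling the wait between tries.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused") // stand-in for the apply failure
		}
		return nil
	})
}
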
	I0510 17:52:47.967271 1062960 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:52:47.967381 1062960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:52:48.055318 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 17:52:48.055400 1062960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 17:52:48.060787 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:48.149914 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 17:52:48.149947 1062960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 17:52:48.175035 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 17:52:48.175070 1062960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 17:52:48.263718 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 17:52:48.263750 1062960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 17:52:48.282195 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:48.282227 1062960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 17:52:48.297636 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:48.359369 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:52.345196 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.598944537s)
	I0510 17:52:52.345534 1062960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.378119806s)
	I0510 17:52:52.345610 1062960 api_server.go:72] duration metric: took 5.117639828s to wait for apiserver process to appear ...
	I0510 17:52:52.345622 1062960 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:52:52.345683 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.350659 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 17:52:52.350693 1062960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 17:52:52.462305 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.401465129s)
	I0510 17:52:52.462366 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.164694895s)
	I0510 17:52:52.462425 1062960 addons.go:479] Verifying addon metrics-server=true in "newest-cni-173135"
	I0510 17:52:52.558877 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199364581s)
	I0510 17:52:52.560719 1062960 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-173135 addons enable metrics-server
	
	I0510 17:52:52.562364 1062960 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0510 17:52:52.563698 1062960 addons.go:514] duration metric: took 5.33556927s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0510 17:52:52.846151 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.850590 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0510 17:52:52.851935 1062960 api_server.go:141] control plane version: v1.33.0
	I0510 17:52:52.851968 1062960 api_server.go:131] duration metric: took 506.335848ms to wait for apiserver health ...
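
The health wait above polls the apiserver's /healthz endpoint until it answers 200 "ok"; the 500 responses earlier in the log are the same probe hitting the not-yet-finished rbac/bootstrap-roles poststart hook. A minimal sketch of such a poll, assuming certificate verification is skipped for the probe (the test apiserver serves a self-signed cert); names are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test-only
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200 "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
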
	I0510 17:52:52.851979 1062960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:52:52.855964 1062960 system_pods.go:59] 9 kube-system pods found
	I0510 17:52:52.856013 1062960 system_pods.go:61] "coredns-674b8bbfcf-l2m27" [11b63e72-35af-4a70-a7d3-b11e18104e2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856039 1062960 system_pods.go:61] "etcd-newest-cni-173135" [60c35044-778d-45d4-8d96-e58efbd9b54b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 17:52:52.856062 1062960 system_pods.go:61] "kindnet-5nzlt" [9158a53c-5cd1-426c-a255-37618e292899] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0510 17:52:52.856073 1062960 system_pods.go:61] "kube-apiserver-newest-cni-173135" [790eeefa-f593-4148-b5f3-43bf9807166f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 17:52:52.856085 1062960 system_pods.go:61] "kube-controller-manager-newest-cni-173135" [75bdb232-66d8-442a-8566-34a3d4674876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 17:52:52.856096 1062960 system_pods.go:61] "kube-proxy-v2tt7" [e502d755-4ecb-4567-9259-547f7c063830] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 17:52:52.856108 1062960 system_pods.go:61] "kube-scheduler-newest-cni-173135" [8bfc0953-197d-4185-b2e7-6e1a2d97a8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 17:52:52.856117 1062960 system_pods.go:61] "metrics-server-f79f97bbb-z4g7z" [a6bcfd5e-6f32-43ef-a6e7-336c90faf9ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856125 1062960 system_pods.go:61] "storage-provisioner" [effda141-cd8d-4f87-97a1-9166c59e1de0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856132 1062960 system_pods.go:74] duration metric: took 4.146105ms to wait for pod list to return data ...
	I0510 17:52:52.856143 1062960 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:52:52.858633 1062960 default_sa.go:45] found service account: "default"
	I0510 17:52:52.858658 1062960 default_sa.go:55] duration metric: took 2.507165ms for default service account to be created ...
	I0510 17:52:52.858670 1062960 kubeadm.go:578] duration metric: took 5.630701473s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:52.858701 1062960 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:52:52.861375 1062960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 17:52:52.861398 1062960 node_conditions.go:123] node cpu capacity is 8
	I0510 17:52:52.861411 1062960 node_conditions.go:105] duration metric: took 2.704535ms to run NodePressure ...
	I0510 17:52:52.861422 1062960 start.go:241] waiting for startup goroutines ...
	I0510 17:52:52.861431 1062960 start.go:246] waiting for cluster config update ...
	I0510 17:52:52.861444 1062960 start.go:255] writing updated cluster config ...
	I0510 17:52:52.861692 1062960 ssh_runner.go:195] Run: rm -f paused
	I0510 17:52:52.918445 1062960 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:52:52.920711 1062960 out.go:177] * Done! kubectl is now configured to use "newest-cni-173135" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 10 18:00:05 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:05.858827143Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0e0d8e0f-338c-4898-92e0-2ed93ca10f48 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:05 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:05.859147853Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4d925a5f-d031-4f65-a35e-f1f7e4235f1e name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:05 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:05.859277657Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0e0d8e0f-338c-4898-92e0-2ed93ca10f48 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:05 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:05.859767468Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ffc289e4-c157-4e98-bc44-a83035ff835e name=/runtime.v1alpha2.ImageService/PullImage
	May 10 18:00:05 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:05.860932071Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:00:17 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:17.858798115Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=82d71da7-999b-4393-ab1b-e37f7cba4c3d name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:17 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:17.859092567Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=82d71da7-999b-4393-ab1b-e37f7cba4c3d name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:29 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:29.859017602Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f3fafa01-9aa3-4d27-9eeb-02008f1e0f6f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:29 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:29.859278820Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f3fafa01-9aa3-4d27-9eeb-02008f1e0f6f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:44 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:44.858824464Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4dd8c848-dae3-4a31-ab82-34c2ceef3b34 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:44 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:44.859077945Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4dd8c848-dae3-4a31-ab82-34c2ceef3b34 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:56 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:56.858894622Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=efb71599-7376-43f7-b1ad-195659b3da19 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:00:56 old-k8s-version-697935 crio[652]: time="2025-05-10 18:00:56.859126658Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=efb71599-7376-43f7-b1ad-195659b3da19 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:07 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:07.858826534Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0e896ddb-b85f-42c1-9695-fee3b0818f60 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:07 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:07.859140927Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0e896ddb-b85f-42c1-9695-fee3b0818f60 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:09 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:09.826150480Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2d884027-0762-4d77-adac-20d1a1168847 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:09 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:09.826413145Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2d884027-0762-4d77-adac-20d1a1168847 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:17 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:17.859025495Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=97e10bfa-f34a-4f3f-88f5-cad65852e5c8 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:17 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:17.859369948Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=97e10bfa-f34a-4f3f-88f5-cad65852e5c8 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:22 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:22.859069532Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=105a97c2-b9cc-4347-a2c3-12ebb6cc9f10 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:22 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:22.859290636Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=105a97c2-b9cc-4347-a2c3-12ebb6cc9f10 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:32 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:32.858757773Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b509d863-a9ee-46f3-b7bb-39ece455150b name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:32 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:32.859103789Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b509d863-a9ee-46f3-b7bb-39ece455150b name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:34 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:34.858672321Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fb48c37f-39e2-4c9d-b5c1-d38f676d522d name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:01:34 old-k8s-version-697935 crio[652]: time="2025-05-10 18:01:34.858896753Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fb48c37f-39e2-4c9d-b5c1-d38f676d522d name=/runtime.v1alpha2.ImageService/ImageStatus
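
The repeated ImageStatus entries above are CRI RPCs arriving over CRI-O's unix socket (the path appears later in the node's cri-socket annotation). A hedged sketch of issuing the same RPC with k8s.io/cri-api, using the v1alpha2 API version the log names; this assumes a cri-api release that still ships v1alpha2:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial CRI-O's socket; insecure credentials are fine on a local unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "fake.domain/registry.k8s.io/echoserver:1.4"},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		fmt.Println("image not found") // matches the "Image ... not found" lines above
	} else {
		fmt.Printf("image id: %s\n", resp.Image.Id)
	}
}
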
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	1a6852b4c023d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      4 minutes ago       Exited              dashboard-metrics-scraper   6                   9084dcec4b23c       dashboard-metrics-scraper-8d5bb5db8-vt5c5
	7ff368bbd66a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Running             storage-provisioner         1                   d496b56233a1d       storage-provisioner
	c96e9a182b388       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Running             busybox                     0                   12e6a90e06c7d       busybox
	85ae431b96297       docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495    10 minutes ago      Running             kindnet-cni                 0                   2e251720776d8       kindnet-n9r85
	51486f3a113ad       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                      10 minutes ago      Running             coredns                     0                   f3e5024271b0b       coredns-74ff55c5b-c9gkr
	53760e48d7e9d       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                      10 minutes ago      Running             kube-proxy                  0                   851b838d18916       kube-proxy-8tdw4
	592514d263d59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner         0                   d496b56233a1d       storage-provisioner
	162ead39b3bd0       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                      10 minutes ago      Running             etcd                        0                   9b156512e5596       etcd-old-k8s-version-697935
	b8e9e23c661a6       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                      10 minutes ago      Running             kube-scheduler              0                   d961a85ac011e       kube-scheduler-old-k8s-version-697935
	93b305fd7c4bf       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                      10 minutes ago      Running             kube-controller-manager     0                   5845887f48634       kube-controller-manager-old-k8s-version-697935
	b74f559337208       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                      10 minutes ago      Running             kube-apiserver              0                   05cdd901e82f2       kube-apiserver-old-k8s-version-697935
	
	
	==> coredns [51486f3a113adf2f4be53c43f2837f083c5b8bfaf0db20ec166be96f0b9f48d8] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56662 - 46607 "HINFO IN 515717139491869813.6170506681979060124. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.053337476s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35492 - 59128 "HINFO IN 2607771096306316054.5838860855392937212. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065924808s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0510 17:51:47.803202       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-05-10 17:51:17.802226451 +0000 UTC m=+0.202082203) (total time: 30.000896115s):
	Trace[2019727887]: [30.000896115s] [30.000896115s] END
	E0510 17:51:47.803228       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0510 17:51:47.803354       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-05-10 17:51:17.802387809 +0000 UTC m=+0.202243559) (total time: 30.000930239s):
	Trace[939984059]: [30.000930239s] [30.000930239s] END
	E0510 17:51:47.803371       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0510 17:51:47.803518       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-05-10 17:51:17.802529668 +0000 UTC m=+0.202385412) (total time: 30.000957284s):
	Trace[911902081]: [30.000957284s] [30.000957284s] END
	E0510 17:51:47.803530       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-697935
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-697935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=old-k8s-version-697935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_48_58_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-697935
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:01:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 17:56:47 +0000   Sat, 10 May 2025 17:48:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 17:56:47 +0000   Sat, 10 May 2025 17:48:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 17:56:47 +0000   Sat, 10 May 2025 17:48:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 17:56:47 +0000   Sat, 10 May 2025 17:49:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-697935
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 eec0bda788b749f7970518dbe01a5319
	  System UUID:                8baa3264-9de9-4216-a70b-20564168beb1
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-74ff55c5b-c9gkr                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-old-k8s-version-697935                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-n9r85                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-old-k8s-version-697935             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-old-k8s-version-697935    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8tdw4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-old-k8s-version-697935             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-9975d5f86-82bt9                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-vt5c5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-6bj6d               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node old-k8s-version-697935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                12m                kubelet     Node old-k8s-version-697935 status is now: NodeReady
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet     Node old-k8s-version-697935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +1.019813] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000003] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000002] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +4.095573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000007] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000001] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000002] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +3.075626] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000002] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +1.019906] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000006] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	
	
	==> etcd [162ead39b3bd07ba0aad4c32cd0b64430e21f272ad99288e7abb418c3024e004] <==
	2025-05-10 17:58:04.709258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:58:14.709299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:58:24.709277 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:58:34.709250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:58:44.709291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:58:54.709281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:59:04.709228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:59:14.709324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:59:24.709228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:59:34.709232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:59:44.709303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 17:59:54.709336 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:00:04.709275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:00:14.709288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:00:24.709226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:00:34.709294 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:00:44.709303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:00:54.709247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:01:04.709247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:01:12.389904 I | mvcc: store.index: compact 1031
	2025-05-10 18:01:12.406117 I | mvcc: finished scheduled compaction at 1031 (took 15.905087ms)
	2025-05-10 18:01:14.709299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:01:24.709249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:01:34.709315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:01:44.709547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:01:46 up  3:44,  0 users,  load average: 0.51, 1.06, 3.24
	Linux old-k8s-version-697935 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [85ae431b96297b609864888406c19bb8709aa34cca8c804fe0d49328d5de00b5] <==
	I0510 17:59:41.751692       1 main.go:301] handling current node
	I0510 17:59:51.747551       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 17:59:51.747583       1 main.go:301] handling current node
	I0510 18:00:01.748595       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:00:01.748629       1 main.go:301] handling current node
	I0510 18:00:11.753014       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:00:11.753049       1 main.go:301] handling current node
	I0510 18:00:21.744564       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:00:21.744597       1 main.go:301] handling current node
	I0510 18:00:31.747539       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:00:31.747580       1 main.go:301] handling current node
	I0510 18:00:41.744275       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:00:41.744324       1 main.go:301] handling current node
	I0510 18:00:51.744199       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:00:51.744249       1 main.go:301] handling current node
	I0510 18:01:01.752780       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:01:01.752811       1 main.go:301] handling current node
	I0510 18:01:11.751507       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:01:11.751540       1 main.go:301] handling current node
	I0510 18:01:21.743940       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:01:21.743973       1 main.go:301] handling current node
	I0510 18:01:31.744951       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:01:31.745000       1 main.go:301] handling current node
	I0510 18:01:41.751517       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:01:41.751559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b74f559337208741c7d3afe5075d361e32a449f67ebc91a2c4249f7184a95bef] <==
	I0510 17:58:25.153314       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 17:58:25.153324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 17:59:02.968890       1 client.go:360] parsed scheme: "passthrough"
	I0510 17:59:02.968931       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 17:59:02.968938       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0510 17:59:17.429062       1 handler_proxy.go:102] no RequestInfo found in the context
	E0510 17:59:17.429144       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0510 17:59:17.429152       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 17:59:34.673509       1 client.go:360] parsed scheme: "passthrough"
	I0510 17:59:34.673551       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 17:59:34.673558       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 18:00:17.032999       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:00:17.033045       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:00:17.033053       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 18:00:57.201019       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:00:57.201059       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:00:57.201067       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0510 18:01:17.430240       1 handler_proxy.go:102] no RequestInfo found in the context
	E0510 18:01:17.430316       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0510 18:01:17.430325       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:01:35.935967       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:01:35.936011       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:01:35.936018       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [93b305fd7c4bf90549012111c5bf582d7bea58f61b982b0ea0ab95b4603c5ab3] <==
	I0510 17:57:32.652921       1 request.go:655] Throttling request took 1.048396815s, request: GET:https://192.168.103.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0510 17:57:33.504049       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 17:57:40.864099       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 17:58:05.154185       1 request.go:655] Throttling request took 1.048741296s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0510 17:58:06.005262       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 17:58:11.365624       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 17:58:37.655586       1 request.go:655] Throttling request took 1.048685817s, request: GET:https://192.168.103.2:8443/apis/apps/v1?timeout=32s
	W0510 17:58:38.506868       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 17:58:41.867603       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 17:59:10.157215       1 request.go:655] Throttling request took 1.048643739s, request: GET:https://192.168.103.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0510 17:59:11.008147       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 17:59:12.369081       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 17:59:42.658267       1 request.go:655] Throttling request took 1.048334974s, request: GET:https://192.168.103.2:8443/apis/batch/v1?timeout=32s
	E0510 17:59:42.870502       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0510 17:59:43.509101       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:00:13.372237       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:00:15.159462       1 request.go:655] Throttling request took 1.048746446s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
	W0510 18:00:16.010316       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:00:43.873079       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:00:47.660465       1 request.go:655] Throttling request took 1.048323878s, request: GET:https://192.168.103.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0510 18:00:48.511441       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:01:14.374825       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:01:20.161788       1 request.go:655] Throttling request took 1.048703549s, request: GET:https://192.168.103.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0510 18:01:21.012658       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:01:44.876662       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [53760e48d7e9da22f6aa6e6dbe00df0e633cfea641309fdc64339e399ea491e5] <==
	I0510 17:49:14.175845       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0510 17:49:14.175928       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0510 17:49:14.268358       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0510 17:49:14.268473       1 server_others.go:185] Using iptables Proxier.
	I0510 17:49:14.268779       1 server.go:650] Version: v1.20.0
	I0510 17:49:14.269335       1 config.go:315] Starting service config controller
	I0510 17:49:14.269352       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0510 17:49:14.269435       1 config.go:224] Starting endpoint slice config controller
	I0510 17:49:14.269566       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0510 17:49:14.369653       1 shared_informer.go:247] Caches are synced for service config 
	I0510 17:49:14.374408       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0510 17:51:17.944562       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0510 17:51:17.944752       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0510 17:51:17.978812       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0510 17:51:17.978987       1 server_others.go:185] Using iptables Proxier.
	I0510 17:51:17.979335       1 server.go:650] Version: v1.20.0
	I0510 17:51:17.980427       1 config.go:315] Starting service config controller
	I0510 17:51:17.980485       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0510 17:51:17.980553       1 config.go:224] Starting endpoint slice config controller
	I0510 17:51:17.980584       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0510 17:51:18.082130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0510 17:51:18.082297       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [b8e9e23c661a6fa84bb7f1f46dd1e3b80bd48542b9bbbfde79c96f06e70425b5] <==
	E0510 17:48:54.756903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0510 17:48:54.757151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0510 17:48:54.757754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0510 17:48:55.574155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0510 17:48:55.604883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0510 17:48:55.618505       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0510 17:48:55.676765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0510 17:48:55.679964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0510 17:48:55.744488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0510 17:48:55.745288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0510 17:48:55.800324       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0510 17:48:55.846155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0510 17:48:58.250713       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0510 17:51:11.495408       1 serving.go:331] Generated self-signed cert in-memory
	I0510 17:51:16.763501       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0510 17:51:16.763531       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0510 17:51:16.763561       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0510 17:51:16.763566       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0510 17:51:16.763593       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0510 17:51:16.763597       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0510 17:51:16.763852       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0510 17:51:16.763933       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0510 17:51:16.872560       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I0510 17:51:16.872701       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0510 17:51:16.965367       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
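
The burst of "forbidden" reflector errors at 17:48:54-55 is the scheduler racing the apiserver's RBAC bootstrap right after the restart; once the informer caches sync (17:48:58, and again at 17:51:16 after the second start) the errors stop. One way to confirm the permissions settled, using kubectl's standard impersonation support with names taken from this log:

    kubectl --context old-k8s-version-697935 auth can-i list pods --as=system:kube-scheduler
    kubectl --context old-k8s-version-697935 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
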
	
	
	==> kubelet <==
	May 10 18:00:38 old-k8s-version-697935 kubelet[1198]: E0510 18:00:38.858863    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:00:44 old-k8s-version-697935 kubelet[1198]: E0510 18:00:44.859342    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:00:51 old-k8s-version-697935 kubelet[1198]: I0510 18:00:51.858409    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1a6852b4c023d79f06ee6e6cce849a2868ff019a559c1d1a8d21f9f41a1e3e56
	May 10 18:00:51 old-k8s-version-697935 kubelet[1198]: E0510 18:00:51.858813    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:00:56 old-k8s-version-697935 kubelet[1198]: E0510 18:00:56.859439    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:01:02 old-k8s-version-697935 kubelet[1198]: I0510 18:01:02.858389    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1a6852b4c023d79f06ee6e6cce849a2868ff019a559c1d1a8d21f9f41a1e3e56
	May 10 18:01:02 old-k8s-version-697935 kubelet[1198]: E0510 18:01:02.858702    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:01:06 old-k8s-version-697935 kubelet[1198]: E0510 18:01:06.959133    1198 remote_image.go:113] PullImage "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	May 10 18:01:06 old-k8s-version-697935 kubelet[1198]: E0510 18:01:06.959197    1198 kuberuntime_image.go:51] Pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	May 10 18:01:06 old-k8s-version-697935 kubelet[1198]: E0510 18:01:06.959319    1198 kuberuntime_manager.go:829] container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubernetes-dashboard-token-lrxrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a): ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	May 10 18:01:06 old-k8s-version-697935 kubelet[1198]: E0510 18:01:06.959353    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	May 10 18:01:07 old-k8s-version-697935 kubelet[1198]: E0510 18:01:07.859396    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:01:09 old-k8s-version-697935 kubelet[1198]: E0510 18:01:09.859489    1198 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1, memory: /docker/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/system.slice/kubelet.service
	May 10 18:01:15 old-k8s-version-697935 kubelet[1198]: I0510 18:01:15.858503    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1a6852b4c023d79f06ee6e6cce849a2868ff019a559c1d1a8d21f9f41a1e3e56
	May 10 18:01:15 old-k8s-version-697935 kubelet[1198]: E0510 18:01:15.858812    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:01:17 old-k8s-version-697935 kubelet[1198]: E0510 18:01:17.859628    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:01:22 old-k8s-version-697935 kubelet[1198]: E0510 18:01:22.859608    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:01:30 old-k8s-version-697935 kubelet[1198]: I0510 18:01:30.858317    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1a6852b4c023d79f06ee6e6cce849a2868ff019a559c1d1a8d21f9f41a1e3e56
	May 10 18:01:30 old-k8s-version-697935 kubelet[1198]: E0510 18:01:30.858577    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:01:32 old-k8s-version-697935 kubelet[1198]: E0510 18:01:32.859540    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:01:34 old-k8s-version-697935 kubelet[1198]: E0510 18:01:34.859177    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:01:45 old-k8s-version-697935 kubelet[1198]: I0510 18:01:45.858527    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1a6852b4c023d79f06ee6e6cce849a2868ff019a559c1d1a8d21f9f41a1e3e56
	May 10 18:01:45 old-k8s-version-697935 kubelet[1198]: E0510 18:01:45.858912    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:01:45 old-k8s-version-697935 kubelet[1198]: E0510 18:01:45.859488    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:01:45 old-k8s-version-697935 kubelet[1198]: E0510 18:01:45.859903    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
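
Everything the kubelet reports above reduces to two pull loops: ImagePullBackOff on the dashboard image because Docker Hub's unauthenticated rate limit (toomanyrequests) was hit, and ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which the test configures precisely so it can never be pulled. The first loop can be sidestepped by authenticating and pre-loading the image into the node; a minimal sketch with standard docker and minikube commands (the digest is copied from the log; docker login assumes valid Docker Hub credentials):

    docker login
    docker pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    out/minikube-linux-amd64 -p old-k8s-version-697935 image load docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
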
	
	
	==> storage-provisioner [592514d263d59d3d9ed18bc51963bdd8df639168346e410f3424186ff72fc2c7] <==
	I0510 17:49:38.477578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:49:38.489302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:49:38.489346       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0510 17:49:38.502387       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 17:49:38.502694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_2320ec47-387a-4f04-b624-7088d7268c3d!
	I0510 17:49:38.502771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"979ad0ce-9512-4fa4-88fe-42fe076ce8b8", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-697935_2320ec47-387a-4f04-b624-7088d7268c3d became leader
	I0510 17:49:38.603110       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_2320ec47-387a-4f04-b624-7088d7268c3d!
	I0510 17:51:17.773328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:51:47.776812       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7ff368bbd66a425974db0746bf2ed56b83b99a060d446470e7f4227046f9dc76] <==
	I0510 17:51:48.150087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:51:48.161156       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:51:48.161214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0510 17:52:05.588741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 17:52:05.588922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_18a6cee5-9c94-47f4-aec5-4878384bdfdf!
	I0510 17:52:05.588874       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"979ad0ce-9512-4fa4-88fe-42fe076ce8b8", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-697935_18a6cee5-9c94-47f4-aec5-4878384bdfdf became leader
	I0510 17:52:05.689934       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_18a6cee5-9c94-47f4-aec5-4878384bdfdf!
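
Read together, the two storage-provisioner logs are a leader-election handoff: the first instance held the kube-system/k8s.io-minikube-hostpath Endpoints lease until its apiserver dial timed out (the i/o timeout to 10.96.0.1:443 during the restart window) and exited fatally, and the second instance reacquired the lease at 17:52:05. The lease record itself can be inspected with plain kubectl (object and namespace names are the ones printed in the log):

    kubectl --context old-k8s-version-697935 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
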
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-697935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-697935 describe pod metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-697935 describe pod metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d: exit status 1 (64.658025ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-82bt9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd95d586-6bj6d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-697935 describe pod metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d: exit status 1
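The NotFound errors above are a namespace mismatch rather than a vanished pod: the non-running pods were listed with -A across all namespaces, but the follow-up describe ran without -n and therefore looked only in default. A minimal sketch of the same query with the namespaces recorded earlier in this report (kube-system and kubernetes-dashboard):

    kubectl --context old-k8s-version-697935 -n kube-system describe pod metrics-server-9975d5f86-82bt9
    kubectl --context old-k8s-version-697935 -n kubernetes-dashboard describe pod kubernetes-dashboard-cd95d586-6bj6d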
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cmxkz" [c123f562-4744-4a16-98d1-fce9d4f44d5c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-05-10 18:09:58.938460833 +0000 UTC m=+4538.620247958
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-256321 describe po kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-256321 describe po kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-cmxkz
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-256321/192.168.76.2
Start Time:       Sat, 10 May 2025 17:51:25 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6zlkm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-6zlkm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz to embed-certs-256321
Warning  Failed     15m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     13m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m30s (x47 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m50s (x50 over 18m)  kubelet            Error: ImagePullBackOff
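The counters in the Events table are Kubernetes' event deduplication at work: "x5 over 18m" on the Pulling/ErrImagePull entries and "x47"/"x50" on the BackOff entries are aggregated repeat counts from kubelet's image-pull backoff, which doubles up to a 5m cap (the same cap the CrashLoopBackOff messages elsewhere in this report spell out). A minimal sketch for reading the stream directly while reproducing, using standard kubectl flags:

    kubectl --context embed-certs-256321 -n kubernetes-dashboard get events --sort-by=.lastTimestamp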
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-256321 logs kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-256321 logs kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard: exit status 1 (71.862778ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-7779f9b69b-cmxkz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-256321 logs kubernetes-dashboard-7779f9b69b-cmxkz -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-256321 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-256321
helpers_test.go:235: (dbg) docker inspect embed-certs-256321:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc",
	        "Created": "2025-05-10T17:50:07.057229049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1049070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:51:06.548958168Z",
	            "FinishedAt": "2025-05-10T17:51:04.564798034Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/hosts",
	        "LogPath": "/var/lib/docker/containers/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc/63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc-json.log",
	        "Name": "/embed-certs-256321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-256321:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-256321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "63ef81e147639f8a2c8ea835891fc2be0a5e82d2e68596f6895399d4134dc3dc",
	                "LowerDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/834ab38eb942c563cbabcd3318ac006a46874b5d728dc4c4bc5935dfdbab3d7a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-256321",
	                "Source": "/var/lib/docker/volumes/embed-certs-256321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-256321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-256321",
	                "name.minikube.sigs.k8s.io": "embed-certs-256321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7feda4654c2d46844b5b20e13b89d18059ad4da9e0e19e5020bd5ddd9aec57d",
	            "SandboxKey": "/var/run/docker/netns/a7feda4654c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-256321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:2c:5a:2f:2d:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ba2829c5de699830ad18d5fc13a225f391a5c001fb69b6f025c5df9f94898875",
	                    "EndpointID": "29ac505c1766abb3203531b1448a2d4fcd2e3076f2ec8757e8216a52971e763a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-256321",
	                        "63ef81e14763"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
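Most of the inspect dump above is boilerplate; individual fields can be extracted with docker inspect's Go-template --format flag instead of scanning the JSON. A minimal sketch pulling the two facts the post-mortem actually uses, the container state and the node IP (container and network names taken from the dump above):

    docker inspect -f '{{.State.Status}}' embed-certs-256321
    docker inspect -f '{{(index .NetworkSettings.Networks "embed-certs-256321").IPAddress}}' embed-certs-256321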
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256321 -n embed-certs-256321
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-256321 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-256321 logs -n 25: (1.170989408s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-697935             | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-697935                              | old-k8s-version-697935       | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:50 UTC |                     |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-058078                  | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-676255       | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256321                 | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | no-preload-058078 image list                           | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-173135             | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-173135                  | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-173135 image list                           | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
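
For reference, the embed-certs start invocation in the Audit table flattens to this single reproducible command line (arguments exactly as recorded in the rows above):

    out/minikube-linux-amd64 start -p embed-certs-256321 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=crio --kubernetes-version=v1.33.0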
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:39.942859 1062960 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:39.943098 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943129 1062960 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:39.943146 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943562 1062960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:52:39.944604 1062960 out.go:352] Setting JSON to false
	I0510 17:52:39.945997 1062960 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12907,"bootTime":1746886653,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:39.946130 1062960 start.go:140] virtualization: kvm guest
	I0510 17:52:39.948309 1062960 out.go:177] * [newest-cni-173135] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:39.949674 1062960 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:39.949716 1062960 notify.go:220] Checking for updates...
	I0510 17:52:39.952354 1062960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:39.953722 1062960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:39.955058 1062960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:52:39.956484 1062960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:39.957799 1062960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:39.959587 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:39.960145 1062960 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:39.985577 1062960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:52:39.985704 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.035501 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.02617924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.035611 1062960 docker.go:318] overlay module found
	I0510 17:52:40.037784 1062960 out.go:177] * Using the docker driver based on existing profile
	I0510 17:52:40.039108 1062960 start.go:304] selected driver: docker
	I0510 17:52:40.039123 1062960 start.go:908] validating driver "docker" against &{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.039239 1062960 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:40.040135 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.092965 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.084143213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.093291 1062960 start_flags.go:994] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:40.093320 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:40.093383 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:40.093421 1062960 start.go:347] cluster config:
	{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.096146 1062960 out.go:177] * Starting "newest-cni-173135" primary control-plane node in "newest-cni-173135" cluster
	I0510 17:52:40.097483 1062960 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 17:52:40.098838 1062960 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 17:52:40.100016 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:40.100054 1062960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:40.100073 1062960 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:40.100128 1062960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 17:52:40.100157 1062960 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:40.100165 1062960 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:40.100261 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.120688 1062960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon, skipping pull
	I0510 17:52:40.120714 1062960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in daemon, skipping load
	I0510 17:52:40.120734 1062960 cache.go:230] Successfully downloaded all kic artifacts
	I0510 17:52:40.120784 1062960 start.go:360] acquireMachinesLock for newest-cni-173135: {Name:mk75975d6daf4063f8ba79544d03229010ceb1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:40.120860 1062960 start.go:364] duration metric: took 50.497µs to acquireMachinesLock for "newest-cni-173135"
	I0510 17:52:40.120885 1062960 start.go:96] Skipping create...Using existing machine configuration
	I0510 17:52:40.120892 1062960 fix.go:54] fixHost starting: 
	I0510 17:52:40.121107 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.139354 1062960 fix.go:112] recreateIfNeeded on newest-cni-173135: state=Stopped err=<nil>
	W0510 17:52:40.139386 1062960 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 17:52:40.141294 1062960 out.go:177] * Restarting existing docker container for "newest-cni-173135" ...
	W0510 17:52:39.629875 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	W0510 17:52:41.630228 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	I0510 17:52:43.131391 1044308 pod_ready.go:94] pod "etcd-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.131443 1044308 pod_ready.go:86] duration metric: took 50.006172737s for pod "etcd-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.134286 1044308 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.138012 1044308 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.138036 1044308 pod_ready.go:86] duration metric: took 3.724234ms for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.140268 1044308 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.143330 1044308 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.143350 1044308 pod_ready.go:86] duration metric: took 3.063093ms for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.145633 1044308 pod_ready.go:83] waiting for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.329167 1044308 pod_ready.go:94] pod "kube-proxy-8tdw4" is "Ready"
	I0510 17:52:43.329196 1044308 pod_ready.go:86] duration metric: took 183.5398ms for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.529673 1044308 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929860 1044308 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.929890 1044308 pod_ready.go:86] duration metric: took 400.187942ms for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929904 1044308 pod_ready.go:40] duration metric: took 1m22.819056587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:52:43.974390 1044308 start.go:607] kubectl: 1.33.0, cluster: 1.20.0 (minor skew: 13)
	I0510 17:52:43.975971 1044308 out.go:201] 
	W0510 17:52:43.977399 1044308 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.20.0.
	I0510 17:52:43.978880 1044308 out.go:177]   - Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
	I0510 17:52:43.980215 1044308 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-697935" cluster and "default" namespace by default
	I0510 17:52:40.142629 1062960 cli_runner.go:164] Run: docker start newest-cni-173135
	I0510 17:52:40.387277 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.406155 1062960 kic.go:430] container "newest-cni-173135" state is running.
	I0510 17:52:40.406603 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:40.425434 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.425733 1062960 machine.go:93] provisionDockerMachine start ...
	I0510 17:52:40.425813 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:40.446701 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:40.446942 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:40.446954 1062960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 17:52:40.447629 1062960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38662->127.0.0.1:33504: read: connection reset by peer
	I0510 17:52:43.567334 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.567369 1062960 ubuntu.go:169] provisioning hostname "newest-cni-173135"
	I0510 17:52:43.567474 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.585810 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.586092 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.586114 1062960 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-173135 && echo "newest-cni-173135" | sudo tee /etc/hostname
	I0510 17:52:43.720075 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.720180 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.738458 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.738683 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.738700 1062960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-173135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-173135/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-173135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:43.860357 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
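The three SSH commands above (read the hostname, set it, patch /etc/hosts) are the provisioner's re-provisioning sequence; the /etc/hosts step is idempotent, so rerunning it never stacks entries. A minimal standalone sketch of the same replace-or-append pattern (NAME is illustrative):

    NAME=newest-cni-173135
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # an existing 127.0.1.1 line is rewritten in place
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
      else
        # otherwise a fresh entry is appended
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
      fi
    fi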
	I0510 17:52:43.860392 1062960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 17:52:43.860425 1062960 ubuntu.go:177] setting up certificates
	I0510 17:52:43.860438 1062960 provision.go:84] configureAuth start
	I0510 17:52:43.860501 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:43.878837 1062960 provision.go:143] copyHostCerts
	I0510 17:52:43.878913 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem, removing ...
	I0510 17:52:43.878934 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem
	I0510 17:52:43.879010 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 17:52:43.879140 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem, removing ...
	I0510 17:52:43.879154 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem
	I0510 17:52:43.879187 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 17:52:43.879281 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem, removing ...
	I0510 17:52:43.879293 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem
	I0510 17:52:43.879328 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 17:52:43.879447 1062960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.newest-cni-173135 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-173135]
	I0510 17:52:44.399990 1062960 provision.go:177] copyRemoteCerts
	I0510 17:52:44.400060 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:44.400097 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.417363 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.509498 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:44.533816 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 17:52:44.556664 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:52:44.579844 1062960 provision.go:87] duration metric: took 719.387116ms to configureAuth
	I0510 17:52:44.579874 1062960 ubuntu.go:193] setting minikube options for container-runtime
	I0510 17:52:44.580082 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:44.580225 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.597779 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.597997 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:44.598015 1062960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:44.861571 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:44.861603 1062960 machine.go:96] duration metric: took 4.435849898s to provisionDockerMachine
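The provisioning step just above hands the service CIDR (10.96.0.0/12) to CRI-O as an insecure registry through a sysconfig drop-in, then restarts the daemon. A quick way to confirm the override landed, assuming the kicbase crio.service consumes that file via EnvironmentFile (not shown in this log):

    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environmentfile   # shows where the drop-in is sourced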
	I0510 17:52:44.861615 1062960 start.go:293] postStartSetup for "newest-cni-173135" (driver="docker")
	I0510 17:52:44.861633 1062960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:44.861696 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:44.861741 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.880393 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.968863 1062960 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:44.972444 1062960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 17:52:44.972471 1062960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 17:52:44.972479 1062960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 17:52:44.972486 1062960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 17:52:44.972499 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 17:52:44.972551 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 17:52:44.972632 1062960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem -> 7298152.pem in /etc/ssl/certs
	I0510 17:52:44.972715 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 17:52:44.981250 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:45.004513 1062960 start.go:296] duration metric: took 142.88043ms for postStartSetup
	I0510 17:52:45.004636 1062960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:52:45.004699 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.022563 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.108643 1062960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 17:52:45.113165 1062960 fix.go:56] duration metric: took 4.992266927s for fixHost
	I0510 17:52:45.113190 1062960 start.go:83] releasing machines lock for "newest-cni-173135", held for 4.992317581s
	I0510 17:52:45.113270 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:45.130656 1062960 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:45.130728 1062960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:45.130785 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.130732 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.149250 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.153557 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.235894 1062960 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:45.328928 1062960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:45.467882 1062960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 17:52:45.472485 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.480914 1062960 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 17:52:45.480989 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.489392 1062960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 17:52:45.489423 1062960 start.go:495] detecting cgroup driver to use...
	I0510 17:52:45.489464 1062960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0510 17:52:45.489535 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:45.501274 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:45.512452 1062960 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:45.512528 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:45.524828 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:45.535636 1062960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:45.618303 1062960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:45.695586 1062960 docker.go:241] disabling docker service ...
	I0510 17:52:45.695664 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:45.707968 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:45.719029 1062960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:45.800197 1062960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:45.887455 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:45.898860 1062960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:45.914760 1062960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:45.914818 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.924202 1062960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:45.924260 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.933839 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.944911 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.954202 1062960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:45.962950 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.972583 1062960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.981599 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.991016 1062960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:45.999017 1062960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:46.007316 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.090516 1062960 ssh_runner.go:195] Run: sudo systemctl restart crio
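The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A single grep shows the end state the commands aim for (expected values reconstructed from the commands themselves, not captured from this run):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",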
	I0510 17:52:46.208208 1062960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:46.208290 1062960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:46.212169 1062960 start.go:563] Will wait 60s for crictl version
	I0510 17:52:46.212233 1062960 ssh_runner.go:195] Run: which crictl
	I0510 17:52:46.215714 1062960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:46.250179 1062960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 17:52:46.250256 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.286288 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.324763 1062960 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 17:52:46.326001 1062960 cli_runner.go:164] Run: docker network inspect newest-cni-173135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 17:52:46.342321 1062960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:46.346220 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
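Unlike the sed-based hostname patch earlier, this /etc/hosts update filters out any stale host.minikube.internal entry and re-appends the current one through a PID-named temp file, so repeated runs cannot duplicate the line. The one-liner above, spelled out (IP and name taken from this run):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.94.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts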
	I0510 17:52:46.358987 1062960 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0510 17:52:46.360438 1062960 kubeadm.go:875] updating cluster {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
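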
	I0510 17:52:46.360585 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:46.360654 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.402300 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.402322 1062960 crio.go:433] Images already preloaded, skipping extraction
	I0510 17:52:46.402371 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.438279 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.438310 1062960 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:46.438321 1062960 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.33.0 crio true true} ...
	I0510 17:52:46.438480 1062960 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-173135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
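The unit fragment above becomes a systemd drop-in (10-kubeadm.conf, scp'd a few lines below); the empty ExecStart= line is deliberate, clearing the base unit's command so the full kubelet invocation that follows fully replaces it. To inspect the merged result on the node:

    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # only the drop-in's ExecStart survives the reset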
	I0510 17:52:46.438582 1062960 ssh_runner.go:195] Run: crio config
	I0510 17:52:46.483257 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:46.483281 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:46.483292 1062960 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0510 17:52:46.483315 1062960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-173135 NodeName:newest-cni-173135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:46.483479 1062960 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-173135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0510 17:52:46.483553 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:46.492414 1062960 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:46.492500 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:46.501119 1062960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0510 17:52:46.518140 1062960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:46.535112 1062960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
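The rendered kubeadm config is staged as kubeadm.yaml.new and only swapped in if it differs from the live file (see the `sudo diff -u` further down). Against a v1.33.0 kubeadm, such a file could also be sanity-checked directly; both commands below are illustrative, not part of this run:

    sudo /var/lib/minikube/binaries/v1.33.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
    # or exercise the whole config without touching the node:
    sudo /var/lib/minikube/binaries/v1.33.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run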
	I0510 17:52:46.551871 1062960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:46.555171 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.565729 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.652845 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:46.666063 1062960 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135 for IP: 192.168.94.2
	I0510 17:52:46.666087 1062960 certs.go:194] generating shared ca certs ...
	I0510 17:52:46.666108 1062960 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:46.666267 1062960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 17:52:46.666346 1062960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 17:52:46.666367 1062960 certs.go:256] generating profile certs ...
	I0510 17:52:46.666488 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/client.key
	I0510 17:52:46.666575 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key.eac5560e
	I0510 17:52:46.666638 1062960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key
	I0510 17:52:46.666788 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem (1338 bytes)
	W0510 17:52:46.666836 1062960 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815_empty.pem, impossibly tiny 0 bytes
	I0510 17:52:46.666855 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:46.666891 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:46.666924 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:46.666954 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:46.667014 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:46.667736 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:46.694046 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:52:46.720567 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:46.750803 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 17:52:46.783126 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 17:52:46.861172 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 17:52:46.886437 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:46.909743 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:52:46.932746 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /usr/share/ca-certificates/7298152.pem (1708 bytes)
	I0510 17:52:46.955864 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:46.978875 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem --> /usr/share/ca-certificates/729815.pem (1338 bytes)
	I0510 17:52:47.001846 1062960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:47.018936 1062960 ssh_runner.go:195] Run: openssl version
	I0510 17:52:47.024207 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:47.033345 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036756 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036814 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.043306 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:47.051810 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/729815.pem && ln -fs /usr/share/ca-certificates/729815.pem /etc/ssl/certs/729815.pem"
	I0510 17:52:47.060972 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064315 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 17:06 /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064361 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.070986 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/729815.pem /etc/ssl/certs/51391683.0"
	I0510 17:52:47.079952 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7298152.pem && ln -fs /usr/share/ca-certificates/7298152.pem /etc/ssl/certs/7298152.pem"
	I0510 17:52:47.089676 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093441 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 17:06 /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093504 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.100198 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7298152.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 17:52:47.108827 1062960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:47.112497 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 17:52:47.119081 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 17:52:47.125525 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 17:52:47.131948 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 17:52:47.138247 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 17:52:47.145052 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
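The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how the system trust store locates a CA; the -checkend 86400 calls then assert that each cluster certificate remains valid at least one day out. Both checks for a single cert, using paths from this run:

    # the link name is the cert's subject hash plus ".0"
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    # exit status 0 iff the cert is still valid 86400 seconds (one day) from now
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400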
	I0510 17:52:47.152189 1062960 kubeadm.go:392] StartCluster: {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:47.152299 1062960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:47.152356 1062960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:47.190954 1062960 cri.go:89] found id: ""
	I0510 17:52:47.191057 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:47.200662 1062960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 17:52:47.200683 1062960 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 17:52:47.200729 1062960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 17:52:47.210371 1062960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 17:52:47.211583 1062960 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-173135" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.212205 1062960 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-722920/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-173135" cluster setting kubeconfig missing "newest-cni-173135" context setting]
	I0510 17:52:47.213167 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.215451 1062960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 17:52:47.225765 1062960 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0510 17:52:47.225809 1062960 kubeadm.go:593] duration metric: took 25.118512ms to restartPrimaryControlPlane
	I0510 17:52:47.225823 1062960 kubeadm.go:394] duration metric: took 73.645898ms to StartCluster
	I0510 17:52:47.225844 1062960 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.225925 1062960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.227600 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.227929 1062960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:47.228146 1062960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 17:52:47.228262 1062960 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-173135"
	I0510 17:52:47.228286 1062960 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-173135"
	W0510 17:52:47.228300 1062960 addons.go:247] addon storage-provisioner should already be in state true
	I0510 17:52:47.228322 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:47.228340 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228374 1062960 addons.go:69] Setting default-storageclass=true in profile "newest-cni-173135"
	I0510 17:52:47.228389 1062960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-173135"
	I0510 17:52:47.228696 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.228794 1062960 addons.go:69] Setting metrics-server=true in profile "newest-cni-173135"
	I0510 17:52:47.228819 1062960 addons.go:238] Setting addon metrics-server=true in "newest-cni-173135"
	W0510 17:52:47.228830 1062960 addons.go:247] addon metrics-server should already be in state true
	I0510 17:52:47.228871 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228905 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229098 1062960 addons.go:69] Setting dashboard=true in profile "newest-cni-173135"
	I0510 17:52:47.229122 1062960 addons.go:238] Setting addon dashboard=true in "newest-cni-173135"
	W0510 17:52:47.229131 1062960 addons.go:247] addon dashboard should already be in state true
	I0510 17:52:47.229160 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.229350 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229636 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.231952 1062960 out.go:177] * Verifying Kubernetes components...
	I0510 17:52:47.233708 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.257836 1062960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 17:52:47.259786 1062960 addons.go:238] Setting addon default-storageclass=true in "newest-cni-173135"
	W0510 17:52:47.259808 1062960 addons.go:247] addon default-storageclass should already be in state true
	I0510 17:52:47.259842 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.260502 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:52:47.260520 1062960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:52:47.260587 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.260894 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.269485 1062960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 17:52:47.270561 1062960 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 17:52:47.271826 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 17:52:47.271848 1062960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 17:52:47.271913 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.273848 1062960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:52:47.275490 1062960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.275521 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:52:47.275721 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.287652 1062960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.287676 1062960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:52:47.287737 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.300295 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.308088 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.314958 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.317183 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.570630 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.644300 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:47.648111 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 17:52:47.648144 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 17:52:47.745020 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 17:52:47.745054 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 17:52:47.746206 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.753235 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:52:47.753267 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 17:52:47.852275 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:52:47.852309 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:52:47.854261 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 17:52:47.854291 1062960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 17:52:47.957529 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 17:52:47.957561 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 17:52:47.962427 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:47.962453 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0510 17:52:47.967141 1062960 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0510 17:52:47.967185 1062960 retry.go:31] will retry after 329.411117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
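
The "will retry after 329.411117ms" line above is minikube's apply-retry loop kicking in while the apiserver is still coming up. Below is a minimal Go sketch of that pattern; the applyWithRetry helper and the delay range are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs a command and, on failure, sleeps a short
// randomized delay before trying again, mirroring the retry.go
// behaviour visible in the log above.
func applyWithRetry(args []string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
			return nil
		}
		// Randomized backoff, similar in spirit to the logged
		// "will retry after 329.411117ms".
		d := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("apply failed, will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Hypothetical invocation mirroring the logged kubectl apply.
	if err := applyWithRetry([]string{"kubectl", "apply", "-f",
		"/etc/kubernetes/addons/storageclass.yaml"}, 5); err != nil {
		fmt.Println(err)
	}
}
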
	I0510 17:52:47.967271 1062960 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:52:47.967381 1062960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:52:48.055318 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 17:52:48.055400 1062960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 17:52:48.060787 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:48.149914 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 17:52:48.149947 1062960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 17:52:48.175035 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 17:52:48.175070 1062960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 17:52:48.263718 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 17:52:48.263750 1062960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 17:52:48.282195 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:48.282227 1062960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 17:52:48.297636 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:48.359369 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:52.345196 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.598944537s)
	I0510 17:52:52.345534 1062960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.378119806s)
	I0510 17:52:52.345610 1062960 api_server.go:72] duration metric: took 5.117639828s to wait for apiserver process to appear ...
	I0510 17:52:52.345622 1062960 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:52:52.345683 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.350659 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 17:52:52.350693 1062960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
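
The 500 above comes from the apiserver's composite /healthz: every sub-check reports [+] except poststarthook/rbac/bootstrap-roles, and a single failing check fails the whole probe. A minimal sketch of the polling loop the api_server wait performs follows; the InsecureSkipVerify shortcut is an assumption for this self-signed test cluster, where a real client would load the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 "ok" or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Test apiserver uses a self-signed cert; skip
			// verification for this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

Once the bootstrap-roles post-start hook completes, the same endpoint returns 200 "ok", as the next healthz check in the log shows.
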
	I0510 17:52:52.462305 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.401465129s)
	I0510 17:52:52.462425 1062960 addons.go:479] Verifying addon metrics-server=true in "newest-cni-173135"
	I0510 17:52:52.462366 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.164694895s)
	I0510 17:52:52.558877 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199364581s)
	I0510 17:52:52.560719 1062960 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-173135 addons enable metrics-server
	
	I0510 17:52:52.562364 1062960 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0510 17:52:52.563698 1062960 addons.go:514] duration metric: took 5.33556927s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0510 17:52:52.846151 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.850590 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0510 17:52:52.851935 1062960 api_server.go:141] control plane version: v1.33.0
	I0510 17:52:52.851968 1062960 api_server.go:131] duration metric: took 506.335848ms to wait for apiserver health ...
	I0510 17:52:52.851979 1062960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:52:52.855964 1062960 system_pods.go:59] 9 kube-system pods found
	I0510 17:52:52.856013 1062960 system_pods.go:61] "coredns-674b8bbfcf-l2m27" [11b63e72-35af-4a70-a7d3-b11e18104e2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856039 1062960 system_pods.go:61] "etcd-newest-cni-173135" [60c35044-778d-45d4-8d96-e58efbd9b54b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 17:52:52.856062 1062960 system_pods.go:61] "kindnet-5nzlt" [9158a53c-5cd1-426c-a255-37618e292899] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0510 17:52:52.856073 1062960 system_pods.go:61] "kube-apiserver-newest-cni-173135" [790eeefa-f593-4148-b5f3-43bf9807166f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 17:52:52.856085 1062960 system_pods.go:61] "kube-controller-manager-newest-cni-173135" [75bdb232-66d8-442a-8566-34a3d4674876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 17:52:52.856096 1062960 system_pods.go:61] "kube-proxy-v2tt7" [e502d755-4ecb-4567-9259-547f7c063830] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 17:52:52.856108 1062960 system_pods.go:61] "kube-scheduler-newest-cni-173135" [8bfc0953-197d-4185-b2e7-6e1a2d97a8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 17:52:52.856117 1062960 system_pods.go:61] "metrics-server-f79f97bbb-z4g7z" [a6bcfd5e-6f32-43ef-a6e7-336c90faf9ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856125 1062960 system_pods.go:61] "storage-provisioner" [effda141-cd8d-4f87-97a1-9166c59e1de0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856132 1062960 system_pods.go:74] duration metric: took 4.146105ms to wait for pod list to return data ...
	I0510 17:52:52.856143 1062960 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:52:52.858633 1062960 default_sa.go:45] found service account: "default"
	I0510 17:52:52.858658 1062960 default_sa.go:55] duration metric: took 2.507165ms for default service account to be created ...
	I0510 17:52:52.858670 1062960 kubeadm.go:578] duration metric: took 5.630701473s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:52.858701 1062960 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:52:52.861375 1062960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 17:52:52.861398 1062960 node_conditions.go:123] node cpu capacity is 8
	I0510 17:52:52.861411 1062960 node_conditions.go:105] duration metric: took 2.704535ms to run NodePressure ...
	I0510 17:52:52.861422 1062960 start.go:241] waiting for startup goroutines ...
	I0510 17:52:52.861431 1062960 start.go:246] waiting for cluster config update ...
	I0510 17:52:52.861444 1062960 start.go:255] writing updated cluster config ...
	I0510 17:52:52.861692 1062960 ssh_runner.go:195] Run: rm -f paused
	I0510 17:52:52.918445 1062960 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:52:52.920711 1062960 out.go:177] * Done! kubectl is now configured to use "newest-cni-173135" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 10 18:08:43 embed-certs-256321 crio[675]: time="2025-05-10 18:08:43.269709374Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=91556cfa-d437-424a-ad97-bbebbce1b9db name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:48 embed-certs-256321 crio[675]: time="2025-05-10 18:08:48.270321051Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=840bcee5-4298-411a-8d1a-13f97aba753f name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:48 embed-certs-256321 crio[675]: time="2025-05-10 18:08:48.270606546Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=840bcee5-4298-411a-8d1a-13f97aba753f name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:54 embed-certs-256321 crio[675]: time="2025-05-10 18:08:54.270209063Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=448cbc54-2c8a-4b68-b73f-1b73fb6bf480 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:54 embed-certs-256321 crio[675]: time="2025-05-10 18:08:54.270512619Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=448cbc54-2c8a-4b68-b73f-1b73fb6bf480 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:59 embed-certs-256321 crio[675]: time="2025-05-10 18:08:59.270383679Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=639ebd66-a037-4370-b93d-18f7e55f285c name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:59 embed-certs-256321 crio[675]: time="2025-05-10 18:08:59.270684182Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=639ebd66-a037-4370-b93d-18f7e55f285c name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:05 embed-certs-256321 crio[675]: time="2025-05-10 18:09:05.269731327Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=43fc7543-1c36-4a2d-8c8b-2af2f5fce140 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:05 embed-certs-256321 crio[675]: time="2025-05-10 18:09:05.270019957Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=43fc7543-1c36-4a2d-8c8b-2af2f5fce140 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:12 embed-certs-256321 crio[675]: time="2025-05-10 18:09:12.270144670Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=414d8880-4345-4ac7-a14b-c288c9ad054b name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:12 embed-certs-256321 crio[675]: time="2025-05-10 18:09:12.270430451Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=414d8880-4345-4ac7-a14b-c288c9ad054b name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:20 embed-certs-256321 crio[675]: time="2025-05-10 18:09:20.270039464Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ffd226a5-4fac-4898-b480-6b6f6ef224fa name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:20 embed-certs-256321 crio[675]: time="2025-05-10 18:09:20.270368487Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ffd226a5-4fac-4898-b480-6b6f6ef224fa name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:24 embed-certs-256321 crio[675]: time="2025-05-10 18:09:24.270130904Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b78ec1a8-8e02-430e-9086-b633f3d17324 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:24 embed-certs-256321 crio[675]: time="2025-05-10 18:09:24.270378211Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b78ec1a8-8e02-430e-9086-b633f3d17324 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:33 embed-certs-256321 crio[675]: time="2025-05-10 18:09:33.269694424Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b6b7327e-f2c8-4d2f-8322-776d36b61d58 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:33 embed-certs-256321 crio[675]: time="2025-05-10 18:09:33.270055514Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b6b7327e-f2c8-4d2f-8322-776d36b61d58 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:38 embed-certs-256321 crio[675]: time="2025-05-10 18:09:38.270041030Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=179b6b97-0780-4e76-9f63-b01e938d05b9 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:38 embed-certs-256321 crio[675]: time="2025-05-10 18:09:38.270246938Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=179b6b97-0780-4e76-9f63-b01e938d05b9 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:46 embed-certs-256321 crio[675]: time="2025-05-10 18:09:46.270390307Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=39a0caf6-f8f3-47ce-9807-0c5f4f2cbc0a name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:46 embed-certs-256321 crio[675]: time="2025-05-10 18:09:46.270771295Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=39a0caf6-f8f3-47ce-9807-0c5f4f2cbc0a name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:50 embed-certs-256321 crio[675]: time="2025-05-10 18:09:50.270115087Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2202f3ad-5aca-450a-8152-64cc5b062a76 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:50 embed-certs-256321 crio[675]: time="2025-05-10 18:09:50.270328243Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2202f3ad-5aca-450a-8152-64cc5b062a76 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:57 embed-certs-256321 crio[675]: time="2025-05-10 18:09:57.269537682Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=fd6da2b7-ef60-4643-ab24-0e78eedf1dcb name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:57 embed-certs-256321 crio[675]: time="2025-05-10 18:09:57.269840810Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=fd6da2b7-ef60-4643-ab24-0e78eedf1dcb name=/runtime.v1.ImageService/ImageStatus
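
Each "Checking image status" / "Image ... not found" pair above is CRI-O answering an ImageStatus RPC: neither the pinned dashboard digest nor the deliberately unresolvable fake.domain/registry.k8s.io/echoserver:1.4 is in the local image store. A sketch of issuing the same CRI call directly, assuming the crio.sock path from this report's cri-socket annotation; the rest is illustrative, not minikube or kubelet code.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O over its unix socket (no TLS on a local socket).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "fake.domain/registry.k8s.io/echoserver:1.4"},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		// Mirrors the "Image ... not found" lines in the CRI-O log.
		fmt.Println("image not found")
	} else {
		fmt.Println("image present:", resp.Image.Id)
	}
}
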
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0b240f4cd4893       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   91535c8944806       dashboard-metrics-scraper-86c6bf9756-8cgkk
	d9b57107e62b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   1ec93a8aee78a       storage-provisioner
	72e8906e39fad       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   18 minutes ago      Running             coredns                     1                   20e201db64160       coredns-674b8bbfcf-p95ml
	32ba395c226fe       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   9993c47e23f6c       busybox
	bff80d566cd79       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   18 minutes ago      Running             kube-proxy                  1                   57606f067b007       kube-proxy-4r9lw
	2e6f6081751ab       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f   18 minutes ago      Running             kindnet-cni                 1                   2e987c81482cc       kindnet-gz4vh
	65d5a65ecf063       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   1ec93a8aee78a       storage-provisioner
	b9151b983cbd7       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   18 minutes ago      Running             kube-controller-manager     1                   150a5fe20345e       kube-controller-manager-embed-certs-256321
	98130845020bf       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   18 minutes ago      Running             kube-apiserver              1                   c2c4389db1e8d       kube-apiserver-embed-certs-256321
	a5fd3191197b5       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   18 minutes ago      Running             etcd                        1                   b26c1247c448b       etcd-embed-certs-256321
	b210e16e87728       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   18 minutes ago      Running             kube-scheduler              1                   b6ecefecbcc99       kube-scheduler-embed-certs-256321
	
	
	==> coredns [72e8906e39fadba197e2807b95680114dec737c392e60b99240271e920481151] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:34581 - 36001 "HINFO IN 1083514910540834653.3508312887700292770. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056057215s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
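
All three reflector errors above reduce to one symptom: CoreDNS cannot complete a TCP connection to 10.96.0.1:443, the service VIP fronting the apiserver. A minimal connectivity probe one might run from inside a pod to confirm that, assuming the default service CIDR shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// Matches the "dial tcp 10.96.0.1:443: i/o timeout" above.
		fmt.Println("service VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}
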
	
	
	==> describe nodes <==
	Name:               embed-certs-256321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-256321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=embed-certs-256321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_50_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:50:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-256321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:09:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:07:07 +0000   Sat, 10 May 2025 17:50:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:07:07 +0000   Sat, 10 May 2025 17:50:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:07:07 +0000   Sat, 10 May 2025 17:50:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:07:07 +0000   Sat, 10 May 2025 17:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-256321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bd94e85a446493a8ec17c6b0e53f440
	  System UUID:                f0aac67c-af15-467d-8e38-520b3e855bab
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-674b8bbfcf-p95ml                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-256321                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-gz4vh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-256321             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-256321    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-4r9lw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-256321             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-f79f97bbb-cts6m                100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-8cgkk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-cmxkz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m (x9 over 19m)  kubelet          Node embed-certs-256321 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-256321 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-256321 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-256321 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node embed-certs-256321 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     19m                kubelet          Node embed-certs-256321 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node embed-certs-256321 event: Registered Node embed-certs-256321 in Controller
	  Normal   NodeReady                19m                kubelet          Node embed-certs-256321 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-256321 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-256321 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-256321 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-256321 event: Registered Node embed-certs-256321 in Controller
	
	
	==> dmesg <==
	[  +1.019813] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000003] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000002] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +4.095573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000007] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000001] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000002] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +3.075626] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000002] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +1.019906] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000006] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	
	
	==> etcd [a5fd3191197b5cbef87a2bcc3b8106b810ee03e659a75e84f00ef7ee10c9e4c4] <==
	{"level":"info","ts":"2025-05-10T17:51:15.681840Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:15.681906Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:16.987883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.988055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.988148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.988224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.988287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.988328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.988369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.989748Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-256321 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:51:16.989964Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.990947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.991057Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.991104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.992011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.992596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.998365Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:51:16.996873Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-05-10T17:52:12.755966Z","caller":"traceutil/trace.go:171","msg":"trace[1146970826] transaction","detail":"{read_only:false; response_revision:722; number_of_response:1; }","duration":"121.70152ms","start":"2025-05-10T17:52:12.634238Z","end":"2025-05-10T17:52:12.755939Z","steps":["trace[1146970826] 'process raft request'  (duration: 60.043883ms)","trace[1146970826] 'compare'  (duration: 61.531583ms)"],"step_count":2}
	{"level":"info","ts":"2025-05-10T18:01:17.058691Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1025}
	{"level":"info","ts":"2025-05-10T18:01:17.078916Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1025,"took":"19.927938ms","hash":4140183890,"current-db-size-bytes":3584000,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-05-10T18:01:17.078978Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4140183890,"revision":1025,"compact-revision":-1}
	{"level":"info","ts":"2025-05-10T18:06:17.063890Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1305}
	{"level":"info","ts":"2025-05-10T18:06:17.066606Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1305,"took":"2.43481ms","hash":1402546937,"current-db-size-bytes":3584000,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-05-10T18:06:17.066641Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1402546937,"revision":1305,"compact-revision":1025}
	
	
	==> kernel <==
	 18:10:00 up  3:52,  0 users,  load average: 0.62, 0.62, 2.09
	Linux embed-certs-256321 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2e6f6081751ab3dd4a46da09cb4f2d486b3687d166d051a39658de4b696f8fa9] <==
	I0510 18:07:52.371545       1 main.go:301] handling current node
	I0510 18:08:02.371595       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:08:02.371628       1 main.go:301] handling current node
	I0510 18:08:12.364449       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:08:12.364495       1 main.go:301] handling current node
	I0510 18:08:22.364791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:08:22.364841       1 main.go:301] handling current node
	I0510 18:08:32.367499       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:08:32.367552       1 main.go:301] handling current node
	I0510 18:08:42.371504       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:08:42.371545       1 main.go:301] handling current node
	I0510 18:08:52.371526       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:08:52.371565       1 main.go:301] handling current node
	I0510 18:09:02.372515       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:09:02.372554       1 main.go:301] handling current node
	I0510 18:09:12.371536       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:09:12.371574       1 main.go:301] handling current node
	I0510 18:09:22.364825       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:09:22.364860       1 main.go:301] handling current node
	I0510 18:09:32.367821       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:09:32.367878       1 main.go:301] handling current node
	I0510 18:09:42.373348       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:09:42.373395       1 main.go:301] handling current node
	I0510 18:09:52.371653       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0510 18:09:52.371689       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98130845020bfd267f9f378931eeb53eaa3893e68929464d0cb566065d00d6ad] <==
	E0510 18:06:20.803261       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 18:06:20.803378       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 18:06:20.804379       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:06:20.804424       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 18:07:20.805297       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 18:07:20.805297       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 18:07:20.805394       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0510 18:07:20.805397       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0510 18:07:20.806509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:07:20.806532       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 18:09:20.807078       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 18:09:20.807088       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 18:09:20.807133       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 18:09:20.807200       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 18:09:20.808181       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:09:20.808239       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
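
The recurring 503 for v1beta1.metrics.k8s.io is consistent with the container-status section above: the metrics-server pod can never pull its deliberately fake image, so the aggregated APIService it backs never turns Available and OpenAPI aggregation keeps requeueing. A sketch of inspecting that condition with the standard kube-aggregator client; the kubeconfig location is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	// Assumes the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := aggregator.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc, err := client.ApiregistrationV1().APIServices().Get(
		context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range svc.Status.Conditions {
		// Expect Available=False while metrics-server sits in
		// ImagePullBackOff on its fake.domain image.
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
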
	
	
	==> kube-controller-manager [b9151b983cbd7c76d6ad0b5e6cfe26884bf60f230c413c6cdb1c2d656894acbe] <==
	I0510 18:03:55.703140       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:04:25.230035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:04:25.709509       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:04:55.235265       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:04:55.717364       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:05:25.241280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:05:25.724619       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:05:55.246538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:05:55.732110       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:06:25.251747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:06:25.738969       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:06:55.256895       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:06:55.746202       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:07:25.262814       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:07:25.753890       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:07:55.268339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:07:55.760966       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:08:25.273999       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:08:25.768850       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:08:55.279455       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:08:55.776060       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:09:25.285661       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:09:25.782564       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:09:55.291775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:09:55.789317       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
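	
	The resource-quota controller and garbage collector fail for the same underlying reason: discovery still advertises metrics.k8s.io/v1beta1 through the broken APIService, so every sync marks the group stale. When metrics-server is not expected to run, deleting the APIService stops the noise (a manual cleanup step, not something this test performs):
	
	  kubectl delete apiservice v1beta1.metrics.k8s.io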
	
	
	==> kube-proxy [bff80d566cd7912b97b844e6de59e8652f0e6a7b718b5e30a5f2ba765dfdb71e] <==
	I0510 17:51:21.870802       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:51:22.185314       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0510 17:51:22.185392       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:51:22.455859       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:51:22.456001       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:51:22.546609       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:51:22.547081       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:51:22.547122       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:22.548802       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:51:22.548946       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:51:22.548882       1 config.go:199] "Starting service config controller"
	I0510 17:51:22.549067       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:51:22.548890       1 config.go:329] "Starting node config controller"
	I0510 17:51:22.549212       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:51:22.549035       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:51:22.549339       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:51:22.649113       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:51:22.649203       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:51:22.649314       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0510 17:51:22.650386       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
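	
	kube-proxy itself starts cleanly; the only warning is the unset nodePortAddresses, and the message already names the fix. In a kubeadm-managed cluster the setting lives in the kube-proxy ConfigMap rather than on the command line, so applying the suggestion would look roughly like this (assumes the kubeadm layout, where the configuration sits under the config.conf key):
	
	  kubectl -n kube-system edit configmap kube-proxy   # set nodePortAddresses: ["primary"] in config.conf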
	
	
	==> kube-scheduler [b210e16e877287bbae17030b542059e839ae27acda0111b26e777258af9f7e2f] <==
	I0510 17:51:17.893416       1 serving.go:386] Generated self-signed cert in-memory
	I0510 17:51:22.271568       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:51:22.271720       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:22.279578       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0510 17:51:22.279614       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:22.279625       1 shared_informer.go:350] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0510 17:51:22.279633       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:22.279658       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:51:22.279667       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:51:22.280043       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:51:22.280126       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:51:22.380034       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0510 17:51:22.380170       1 shared_informer.go:357] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0510 17:51:22.380898       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	May 10 18:09:09 embed-certs-256321 kubelet[813]: E0510 18:09:09.270064     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:09:12 embed-certs-256321 kubelet[813]: E0510 18:09:12.270782     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:09:14 embed-certs-256321 kubelet[813]: E0510 18:09:14.300796     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900554300602720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:14 embed-certs-256321 kubelet[813]: E0510 18:09:14.300839     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900554300602720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:20 embed-certs-256321 kubelet[813]: E0510 18:09:20.270654     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
	May 10 18:09:23 embed-certs-256321 kubelet[813]: I0510 18:09:23.269797     813 scope.go:117] "RemoveContainer" containerID="0b240f4cd4893aeb084f8aee4f29c011b649d8b4069978ff6492abcf0b4240a6"
	May 10 18:09:23 embed-certs-256321 kubelet[813]: E0510 18:09:23.270077     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:09:24 embed-certs-256321 kubelet[813]: E0510 18:09:24.270641     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:09:24 embed-certs-256321 kubelet[813]: E0510 18:09:24.302032     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900564301816894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:24 embed-certs-256321 kubelet[813]: E0510 18:09:24.302080     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900564301816894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:33 embed-certs-256321 kubelet[813]: E0510 18:09:33.270442     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
	May 10 18:09:34 embed-certs-256321 kubelet[813]: E0510 18:09:34.303591     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900574303335033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:34 embed-certs-256321 kubelet[813]: E0510 18:09:34.303638     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900574303335033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:36 embed-certs-256321 kubelet[813]: I0510 18:09:36.269234     813 scope.go:117] "RemoveContainer" containerID="0b240f4cd4893aeb084f8aee4f29c011b649d8b4069978ff6492abcf0b4240a6"
	May 10 18:09:36 embed-certs-256321 kubelet[813]: E0510 18:09:36.269475     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:09:38 embed-certs-256321 kubelet[813]: E0510 18:09:38.270488     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:09:44 embed-certs-256321 kubelet[813]: E0510 18:09:44.304518     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900584304328018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:44 embed-certs-256321 kubelet[813]: E0510 18:09:44.304562     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900584304328018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:46 embed-certs-256321 kubelet[813]: E0510 18:09:46.271104     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
	May 10 18:09:50 embed-certs-256321 kubelet[813]: I0510 18:09:50.269862     813 scope.go:117] "RemoveContainer" containerID="0b240f4cd4893aeb084f8aee4f29c011b649d8b4069978ff6492abcf0b4240a6"
	May 10 18:09:50 embed-certs-256321 kubelet[813]: E0510 18:09:50.270077     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-8cgkk_kubernetes-dashboard(3a17b903-4797-436e-9d01-33bbf8aba9f3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-8cgkk" podUID="3a17b903-4797-436e-9d01-33bbf8aba9f3"
	May 10 18:09:50 embed-certs-256321 kubelet[813]: E0510 18:09:50.270630     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-cts6m" podUID="9f1f391b-b287-4d6a-9ee2-2b0d20b7f6f6"
	May 10 18:09:54 embed-certs-256321 kubelet[813]: E0510 18:09:54.305517     813 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900594305344977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:54 embed-certs-256321 kubelet[813]: E0510 18:09:54.305558     813 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900594305344977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:57 embed-certs-256321 kubelet[813]: E0510 18:09:57.270250     813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-cmxkz" podUID="c123f562-4744-4a16-98d1-fce9d4f44d5c"
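	
	Two distinct pull failures dominate this log. The metrics-server image points at fake.domain, an intentionally unresolvable registry configured by the test suite (the Audit table further down shows the same --registries=MetricsServer=fake.domain pattern), so those errors are expected. The dashboard image, by contrast, fails on Docker Hub's unauthenticated pull rate limit. For the latter, pulling on the host and side-loading into the profile is one possible workaround, assuming the host still has quota or authenticated access (and noting the deployment pins a digest, which the tag alone may not satisfy):
	
	  docker pull docker.io/kubernetesui/dashboard:v2.7.0
	  out/minikube-linux-amd64 -p embed-certs-256321 image load docker.io/kubernetesui/dashboard:v2.7.0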
	
	
	==> storage-provisioner [65d5a65ecf063411062857526ea3f59338a709368676895138b1e2978719d99f] <==
	I0510 17:51:21.157830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:51:51.160969       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
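	
	This first storage-provisioner instance died fatally on an i/o timeout against the in-cluster apiserver VIP (10.96.0.1:443), most likely because it came up before the node's networking settled after the restart; the replacement instance below runs normally. The crashed container's output is still retrievable from the current pod:
	
	  kubectl -n kube-system logs storage-provisioner --previous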
	
	
	==> storage-provisioner [d9b57107e62b11acfb2b0469a6d55921e056261237dd7c55ed30e6e552460968] <==
	W0510 18:09:34.920647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:36.923935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:36.927667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:38.930683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:38.935648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:40.938401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:40.942224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:42.945325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:42.950560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:44.953382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:44.957464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:46.960656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:46.966265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:48.969221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:48.973605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:50.976291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:50.980173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:52.984584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:52.989182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:54.991827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:54.995901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:56.999068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:57.005053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:59.008820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:59.013063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
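	
	These warnings are benign: the provisioner's leader-election code still writes v1 Endpoints objects, which the v1.33 API server flags as deprecated on every request while recommending EndpointSlice instead. The EndpointSlice equivalents can be listed with:
	
	  kubectl -n kube-system get endpointslices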
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-256321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-256321 describe pod metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-256321 describe pod metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz: exit status 1 (58.434858ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-cts6m" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-cmxkz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-256321 describe pod metrics-server-f79f97bbb-cts6m kubernetes-dashboard-7779f9b69b-cmxkz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tq4tr" [62e44ee1-f320-4a22-bf54-04c5efdd417e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0510 18:01:04.002645  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:01:21.807687  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-05-10 18:10:02.265798611 +0000 UTC m=+4541.947585736
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 describe po kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-676255 describe po kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-tq4tr
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-676255/192.168.85.2
Start Time:       Sat, 10 May 2025 17:51:23 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n9npx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-n9npx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr to default-k8s-diff-port-676255
Warning  Failed     15m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed     13m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    3m37s (x48 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m2s (x51 over 18m)   kubelet            Error: ImagePullBackOff
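
The event history tells the whole story: five pull attempts over 18 minutes, each rejected with toomanyrequests, followed by a long ImagePullBackOff tail. When triaging a pod stuck like this, sorting the namespace events by time reproduces the same view:

  kubectl --context default-k8s-diff-port-676255 get events -n kubernetes-dashboard --sort-by=.lastTimestamp
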
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 logs kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676255 logs kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard: exit status 1 (72.468888ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-7779f9b69b-tq4tr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-676255 logs kubernetes-dashboard-7779f9b69b-tq4tr -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-676255
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-676255:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42",
	        "Created": "2025-05-10T17:50:07.037978917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1049059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:51:06.553322475Z",
	            "FinishedAt": "2025-05-10T17:51:05.137009895Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/hostname",
	        "HostsPath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/hosts",
	        "LogPath": "/var/lib/docker/containers/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42/55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42-json.log",
	        "Name": "/default-k8s-diff-port-676255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-676255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-676255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55e52f167cfa5102ec6f7202bb5267477654c6defa3da173ba13197c4ad08a42",
	                "LowerDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9ec54734a7feddb0390966d849699a3799a8f795769ea69d03666c36131a50b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-676255",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-676255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-676255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-676255",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-676255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c3cc0b8dac914ea8c60a09224e24b56ee3cada2d7961ab187d7fd7457623144",
	            "SandboxKey": "/var/run/docker/netns/7c3cc0b8dac9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-676255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:a7:b0:8c:49:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c98d5c048caae1ea71a2ab5aaa214a59875742935cdb12b5c62117591aa8de39",
	                    "EndpointID": "fe27426d6520b980d7550b47009f5bfafaf84cd4511452c95376b69af7395b3d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-676255",
	                        "55e52f167cfa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
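
The inspect dump mostly confirms the container is healthy from Docker's point of view: running, restart count 0, and the expected port bindings in place, so the failure sits inside the cluster rather than at the container runtime. Individual fields can be pulled without the full JSON via a Go template, for example:

  docker inspect -f '{{.State.Status}}' default-k8s-diff-port-676255
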
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-676255 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-676255 logs -n 25: (1.313915437s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:50 UTC |                     |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-058078                  | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:50 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-676255       | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-256321                 | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 17:51 UTC | 10 May 25 17:51 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | no-preload-058078 image list                           | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-173135             | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-173135                  | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-173135 image list                           | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| image   | embed-certs-256321 image list                          | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
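	
	(The command table above is minikube's audit log, collected post-mortem; assuming one of the listed profiles still exists, a comparable dump can be regenerated with, e.g.:
	
	    out/minikube-linux-amd64 logs -p embed-certs-256321 --file=postmortem.txt
	
	where --file is optional and -p selects the profile whose audit table and last-start trace are collected.)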
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:39.942859 1062960 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:39.943098 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943129 1062960 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:39.943146 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943562 1062960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:52:39.944604 1062960 out.go:352] Setting JSON to false
	I0510 17:52:39.945997 1062960 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12907,"bootTime":1746886653,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:39.946130 1062960 start.go:140] virtualization: kvm guest
	I0510 17:52:39.948309 1062960 out.go:177] * [newest-cni-173135] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:39.949674 1062960 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:39.949716 1062960 notify.go:220] Checking for updates...
	I0510 17:52:39.952354 1062960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:39.953722 1062960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:39.955058 1062960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:52:39.956484 1062960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:39.957799 1062960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:39.959587 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:39.960145 1062960 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:39.985577 1062960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:52:39.985704 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.035501 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.02617924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.035611 1062960 docker.go:318] overlay module found
	I0510 17:52:40.037784 1062960 out.go:177] * Using the docker driver based on existing profile
	I0510 17:52:40.039108 1062960 start.go:304] selected driver: docker
	I0510 17:52:40.039123 1062960 start.go:908] validating driver "docker" against &{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.039239 1062960 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:40.040135 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.092965 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.084143213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.093291 1062960 start_flags.go:994] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:40.093320 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:40.093383 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:40.093421 1062960 start.go:347] cluster config:
	{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.096146 1062960 out.go:177] * Starting "newest-cni-173135" primary control-plane node in "newest-cni-173135" cluster
	I0510 17:52:40.097483 1062960 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 17:52:40.098838 1062960 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 17:52:40.100016 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:40.100054 1062960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:40.100073 1062960 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:40.100128 1062960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 17:52:40.100157 1062960 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:40.100165 1062960 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:40.100261 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.120688 1062960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon, skipping pull
	I0510 17:52:40.120714 1062960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in daemon, skipping load
	I0510 17:52:40.120734 1062960 cache.go:230] Successfully downloaded all kic artifacts
	I0510 17:52:40.120784 1062960 start.go:360] acquireMachinesLock for newest-cni-173135: {Name:mk75975d6daf4063f8ba79544d03229010ceb1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:40.120860 1062960 start.go:364] duration metric: took 50.497µs to acquireMachinesLock for "newest-cni-173135"
	I0510 17:52:40.120885 1062960 start.go:96] Skipping create...Using existing machine configuration
	I0510 17:52:40.120892 1062960 fix.go:54] fixHost starting: 
	I0510 17:52:40.121107 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.139354 1062960 fix.go:112] recreateIfNeeded on newest-cni-173135: state=Stopped err=<nil>
	W0510 17:52:40.139386 1062960 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 17:52:40.141294 1062960 out.go:177] * Restarting existing docker container for "newest-cni-173135" ...
	W0510 17:52:39.629875 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	W0510 17:52:41.630228 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	I0510 17:52:43.131391 1044308 pod_ready.go:94] pod "etcd-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.131443 1044308 pod_ready.go:86] duration metric: took 50.006172737s for pod "etcd-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.134286 1044308 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.138012 1044308 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.138036 1044308 pod_ready.go:86] duration metric: took 3.724234ms for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.140268 1044308 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.143330 1044308 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.143350 1044308 pod_ready.go:86] duration metric: took 3.063093ms for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.145633 1044308 pod_ready.go:83] waiting for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.329167 1044308 pod_ready.go:94] pod "kube-proxy-8tdw4" is "Ready"
	I0510 17:52:43.329196 1044308 pod_ready.go:86] duration metric: took 183.5398ms for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.529673 1044308 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929860 1044308 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.929890 1044308 pod_ready.go:86] duration metric: took 400.187942ms for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929904 1044308 pod_ready.go:40] duration metric: took 1m22.819056587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:52:43.974390 1044308 start.go:607] kubectl: 1.33.0, cluster: 1.20.0 (minor skew: 13)
	I0510 17:52:43.975971 1044308 out.go:201] 
	W0510 17:52:43.977399 1044308 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.20.0.
	I0510 17:52:43.978880 1044308 out.go:177]   - Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
	I0510 17:52:43.980215 1044308 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-697935" cluster and "default" namespace by default
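	
	(The skew warning above is expected behavior: the host kubectl is v1.33.0 while this cluster runs v1.20.0, 13 minor versions apart. As the log itself suggests, minikube's bundled kubectl sidesteps the mismatch; e.g., while the profile exists:
	
	    out/minikube-linux-amd64 -p old-k8s-version-697935 kubectl -- get pods -A
	
	downloads and runs a kubectl matching the cluster's Kubernetes version.)
	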
	I0510 17:52:40.142629 1062960 cli_runner.go:164] Run: docker start newest-cni-173135
	I0510 17:52:40.387277 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.406155 1062960 kic.go:430] container "newest-cni-173135" state is running.
	I0510 17:52:40.406603 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:40.425434 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.425733 1062960 machine.go:93] provisionDockerMachine start ...
	I0510 17:52:40.425813 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:40.446701 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:40.446942 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:40.446954 1062960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 17:52:40.447629 1062960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38662->127.0.0.1:33504: read: connection reset by peer
	I0510 17:52:43.567334 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.567369 1062960 ubuntu.go:169] provisioning hostname "newest-cni-173135"
	I0510 17:52:43.567474 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.585810 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.586092 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.586114 1062960 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-173135 && echo "newest-cni-173135" | sudo tee /etc/hostname
	I0510 17:52:43.720075 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.720180 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.738458 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.738683 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.738700 1062960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-173135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-173135/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-173135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:43.860357 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:43.860392 1062960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 17:52:43.860425 1062960 ubuntu.go:177] setting up certificates
	I0510 17:52:43.860438 1062960 provision.go:84] configureAuth start
	I0510 17:52:43.860501 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:43.878837 1062960 provision.go:143] copyHostCerts
	I0510 17:52:43.878913 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem, removing ...
	I0510 17:52:43.878934 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem
	I0510 17:52:43.879010 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 17:52:43.879140 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem, removing ...
	I0510 17:52:43.879154 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem
	I0510 17:52:43.879187 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 17:52:43.879281 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem, removing ...
	I0510 17:52:43.879293 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem
	I0510 17:52:43.879328 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 17:52:43.879447 1062960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.newest-cni-173135 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-173135]
	I0510 17:52:44.399990 1062960 provision.go:177] copyRemoteCerts
	I0510 17:52:44.400060 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:44.400097 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.417363 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.509498 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:44.533816 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 17:52:44.556664 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:52:44.579844 1062960 provision.go:87] duration metric: took 719.387116ms to configureAuth
	I0510 17:52:44.579874 1062960 ubuntu.go:193] setting minikube options for container-runtime
	I0510 17:52:44.580082 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:44.580225 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.597779 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.597997 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:44.598015 1062960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:44.861571 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:44.861603 1062960 machine.go:96] duration metric: took 4.435849898s to provisionDockerMachine
	I0510 17:52:44.861615 1062960 start.go:293] postStartSetup for "newest-cni-173135" (driver="docker")
	I0510 17:52:44.861633 1062960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:44.861696 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:44.861741 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.880393 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.968863 1062960 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:44.972444 1062960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 17:52:44.972471 1062960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 17:52:44.972479 1062960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 17:52:44.972486 1062960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 17:52:44.972499 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 17:52:44.972551 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 17:52:44.972632 1062960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem -> 7298152.pem in /etc/ssl/certs
	I0510 17:52:44.972715 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 17:52:44.981250 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:45.004513 1062960 start.go:296] duration metric: took 142.88043ms for postStartSetup
	I0510 17:52:45.004636 1062960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:52:45.004699 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.022563 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.108643 1062960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 17:52:45.113165 1062960 fix.go:56] duration metric: took 4.992266927s for fixHost
	I0510 17:52:45.113190 1062960 start.go:83] releasing machines lock for "newest-cni-173135", held for 4.992317581s
	I0510 17:52:45.113270 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:45.130656 1062960 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:45.130728 1062960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:45.130785 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.130732 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.149250 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.153557 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.235894 1062960 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:45.328928 1062960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:45.467882 1062960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 17:52:45.472485 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.480914 1062960 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 17:52:45.480989 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.489392 1062960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0510 17:52:45.489423 1062960 start.go:495] detecting cgroup driver to use...
	I0510 17:52:45.489464 1062960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0510 17:52:45.489535 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:45.501274 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:45.512452 1062960 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:45.512528 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:45.524828 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:45.535636 1062960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:45.618303 1062960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:45.695586 1062960 docker.go:241] disabling docker service ...
	I0510 17:52:45.695664 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:45.707968 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:45.719029 1062960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:45.800197 1062960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:45.887455 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:45.898860 1062960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:45.914760 1062960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:45.914818 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.924202 1062960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:45.924260 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.933839 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.944911 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.954202 1062960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:45.962950 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.972583 1062960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.981599 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.991016 1062960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:45.999017 1062960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:46.007316 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.090516 1062960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0510 17:52:46.208208 1062960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:46.208290 1062960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:46.212169 1062960 start.go:563] Will wait 60s for crictl version
	I0510 17:52:46.212233 1062960 ssh_runner.go:195] Run: which crictl
	I0510 17:52:46.215714 1062960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:46.250179 1062960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 17:52:46.250256 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.286288 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.324763 1062960 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 17:52:46.326001 1062960 cli_runner.go:164] Run: docker network inspect newest-cni-173135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 17:52:46.342321 1062960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:46.346220 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.358987 1062960 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0510 17:52:46.360438 1062960 kubeadm.go:875] updating cluster {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:52:46.360585 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:46.360654 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.402300 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.402322 1062960 crio.go:433] Images already preloaded, skipping extraction
	I0510 17:52:46.402371 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.438279 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.438310 1062960 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:46.438321 1062960 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.33.0 crio true true} ...
	I0510 17:52:46.438480 1062960 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-173135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
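	
	(The unit fragment above is rendered into the 10-kubeadm.conf drop-in copied to /etc/systemd/system/kubelet.service.d/ a few lines below; the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before overriding it. On the node, the merged result can be inspected with:
	
	    systemctl cat kubelet
	
	which prints the base kubelet.service followed by each drop-in in order.)
	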
	I0510 17:52:46.438582 1062960 ssh_runner.go:195] Run: crio config
	I0510 17:52:46.483257 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:46.483281 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:46.483292 1062960 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0510 17:52:46.483315 1062960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-173135 NodeName:newest-cni-173135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:46.483479 1062960 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-173135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
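	
	(The manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into the single file written to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of an offline sanity check, assuming the YAML is saved locally as kubeadm.yaml and a matching kubeadm v1.33 binary is on PATH:
	
	    kubeadm config validate --config kubeadm.yaml
	
	This reports unknown fields and API-version mismatches without touching the node.)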
	
	I0510 17:52:46.483553 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:46.492414 1062960 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:46.492500 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:46.501119 1062960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0510 17:52:46.518140 1062960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:46.535112 1062960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0510 17:52:46.551871 1062960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:46.555171 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.565729 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.652845 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:46.666063 1062960 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135 for IP: 192.168.94.2
	I0510 17:52:46.666087 1062960 certs.go:194] generating shared ca certs ...
	I0510 17:52:46.666108 1062960 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:46.666267 1062960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 17:52:46.666346 1062960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 17:52:46.666367 1062960 certs.go:256] generating profile certs ...
	I0510 17:52:46.666488 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/client.key
	I0510 17:52:46.666575 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key.eac5560e
	I0510 17:52:46.666638 1062960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key
	I0510 17:52:46.666788 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem (1338 bytes)
	W0510 17:52:46.666836 1062960 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815_empty.pem, impossibly tiny 0 bytes
	I0510 17:52:46.666855 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:46.666891 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:46.666924 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:46.666954 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:46.667014 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:46.667736 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:46.694046 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:52:46.720567 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:46.750803 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 17:52:46.783126 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 17:52:46.861172 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 17:52:46.886437 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:46.909743 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:52:46.932746 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /usr/share/ca-certificates/7298152.pem (1708 bytes)
	I0510 17:52:46.955864 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:46.978875 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem --> /usr/share/ca-certificates/729815.pem (1338 bytes)
	I0510 17:52:47.001846 1062960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:47.018936 1062960 ssh_runner.go:195] Run: openssl version
	I0510 17:52:47.024207 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:47.033345 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036756 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036814 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.043306 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:47.051810 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/729815.pem && ln -fs /usr/share/ca-certificates/729815.pem /etc/ssl/certs/729815.pem"
	I0510 17:52:47.060972 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064315 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 17:06 /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064361 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.070986 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/729815.pem /etc/ssl/certs/51391683.0"
	I0510 17:52:47.079952 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7298152.pem && ln -fs /usr/share/ca-certificates/7298152.pem /etc/ssl/certs/7298152.pem"
	I0510 17:52:47.089676 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093441 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 17:06 /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093504 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.100198 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7298152.pem /etc/ssl/certs/3ec20f2e.0"
	I0510 17:52:47.108827 1062960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:47.112497 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 17:52:47.119081 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 17:52:47.125525 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 17:52:47.131948 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 17:52:47.138247 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 17:52:47.145052 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
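	
	Each openssl x509 ... -checkend 86400 call above asks whether a control-plane certificate will still be valid 86400 seconds (24 h) from now. The same test can be done in pure Go with crypto/x509; the sketch below uses one certificate path from the log and is not minikube's actual implementation.
	
		package main
		
		import (
			"crypto/x509"
			"encoding/pem"
			"fmt"
			"log"
			"os"
			"time"
		)
		
		func main() {
			data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
			if err != nil {
				log.Fatal(err)
			}
			block, _ := pem.Decode(data)
			if block == nil {
				log.Fatal("no PEM block found")
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				log.Fatal(err)
			}
			// openssl's -checkend 86400 answers: will the cert already be expired 86400s from now?
			if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
				fmt.Println("certificate will expire within 24h")
			} else {
				fmt.Println("certificate is valid for at least 24h")
			}
		}
	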
	I0510 17:52:47.152189 1062960 kubeadm.go:392] StartCluster: {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
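	
	The StartCluster line dumps minikube's full cluster-config struct in one go. For readability, here is a hand-trimmed illustration of a few of its fields as a Go struct; this is not minikube's real config type, which lives in its config package and carries many more fields.
	
		package main
		
		import "fmt"
		
		// trimmedClusterConfig mirrors a handful of the fields printed above; illustrative only.
		type trimmedClusterConfig struct {
			Name              string
			Driver            string
			MemoryMB          int
			CPUs              int
			KubernetesVersion string
			ContainerRuntime  string
			PodNetworkCIDR    string // from the kubeadm ExtraOptions entry
		}
		
		func main() {
			cfg := trimmedClusterConfig{
				Name:              "newest-cni-173135",
				Driver:            "docker",
				MemoryMB:          2200,
				CPUs:              2,
				KubernetesVersion: "v1.33.0",
				ContainerRuntime:  "crio",
				PodNetworkCIDR:    "10.42.0.0/16",
			}
			fmt.Printf("%+v\n", cfg)
		}
	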
	I0510 17:52:47.152299 1062960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:47.152356 1062960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:47.190954 1062960 cri.go:89] found id: ""
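	
	cri.go's container listing is just the crictl invocation shown above; an empty result (found id: "") means no kube-system containers survived the restart. A hedged sketch of the same query driven from Go, assuming crictl is on PATH and sudo is available:
	
		package main
		
		import (
			"fmt"
			"log"
			"os/exec"
			"strings"
		)
		
		func main() {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace=kube-system").Output()
			if err != nil {
				log.Fatal(err)
			}
			ids := strings.Fields(string(out)) // one container ID per line
			fmt.Printf("found %d kube-system containers\n", len(ids))
		}
	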
	I0510 17:52:47.191057 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:47.200662 1062960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 17:52:47.200683 1062960 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 17:52:47.200729 1062960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 17:52:47.210371 1062960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 17:52:47.211583 1062960 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-173135" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.212205 1062960 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-722920/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-173135" cluster setting kubeconfig missing "newest-cni-173135" context setting]
	I0510 17:52:47.213167 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
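	
	kubeconfig.go detects that the "newest-cni-173135" cluster and context entries are missing and repairs the file under a write lock. A minimal sketch of such a repair with client-go's clientcmd package; the real code also writes credentials and the current context.
	
		package main
		
		import (
			"log"
		
			"k8s.io/client-go/tools/clientcmd"
			clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
		)
		
		func main() {
			path := "/home/jenkins/minikube-integration/20720-722920/kubeconfig" // path from the log
			cfg, err := clientcmd.LoadFromFile(path)
			if err != nil {
				log.Fatal(err)
			}
			name := "newest-cni-173135"
			if _, ok := cfg.Clusters[name]; !ok { // the "does not appear in" case above
				cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.94.2:8443"}
				cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
				if err := clientcmd.WriteToFile(*cfg, path); err != nil {
					log.Fatal(err)
				}
			}
		}
	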
	I0510 17:52:47.215451 1062960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 17:52:47.225765 1062960 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0510 17:52:47.225809 1062960 kubeadm.go:593] duration metric: took 25.118512ms to restartPrimaryControlPlane
	I0510 17:52:47.225823 1062960 kubeadm.go:394] duration metric: took 73.645898ms to StartCluster
	I0510 17:52:47.225844 1062960 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.225925 1062960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.227600 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.227929 1062960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:47.228146 1062960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 17:52:47.228262 1062960 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-173135"
	I0510 17:52:47.228286 1062960 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-173135"
	W0510 17:52:47.228300 1062960 addons.go:247] addon storage-provisioner should already be in state true
	I0510 17:52:47.228322 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:47.228340 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228374 1062960 addons.go:69] Setting default-storageclass=true in profile "newest-cni-173135"
	I0510 17:52:47.228389 1062960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-173135"
	I0510 17:52:47.228696 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.228794 1062960 addons.go:69] Setting metrics-server=true in profile "newest-cni-173135"
	I0510 17:52:47.228819 1062960 addons.go:238] Setting addon metrics-server=true in "newest-cni-173135"
	W0510 17:52:47.228830 1062960 addons.go:247] addon metrics-server should already be in state true
	I0510 17:52:47.228871 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228905 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229098 1062960 addons.go:69] Setting dashboard=true in profile "newest-cni-173135"
	I0510 17:52:47.229122 1062960 addons.go:238] Setting addon dashboard=true in "newest-cni-173135"
	W0510 17:52:47.229131 1062960 addons.go:247] addon dashboard should already be in state true
	I0510 17:52:47.229160 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.229350 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229636 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.231952 1062960 out.go:177] * Verifying Kubernetes components...
	I0510 17:52:47.233708 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.257836 1062960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 17:52:47.259786 1062960 addons.go:238] Setting addon default-storageclass=true in "newest-cni-173135"
	W0510 17:52:47.259808 1062960 addons.go:247] addon default-storageclass should already be in state true
	I0510 17:52:47.259842 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.260502 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:52:47.260520 1062960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:52:47.260587 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.260894 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.269485 1062960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 17:52:47.270561 1062960 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 17:52:47.271826 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 17:52:47.271848 1062960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 17:52:47.271913 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.273848 1062960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:52:47.275490 1062960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.275521 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:52:47.275721 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.287652 1062960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.287676 1062960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:52:47.287737 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.300295 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.308088 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.314958 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.317183 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.570630 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.644300 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:47.648111 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 17:52:47.648144 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 17:52:47.745020 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 17:52:47.745054 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 17:52:47.746206 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.753235 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:52:47.753267 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 17:52:47.852275 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:52:47.852309 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:52:47.854261 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 17:52:47.854291 1062960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 17:52:47.957529 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 17:52:47.957561 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 17:52:47.962427 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:47.962453 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0510 17:52:47.967141 1062960 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0510 17:52:47.967185 1062960 retry.go:31] will retry after 329.411117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
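	
	The storageclass apply fails because the apiserver is not yet listening on localhost:8443, so retry.go schedules another attempt after ~329 ms. A generic retry loop with jittered, doubling backoff, sketched in Go; the timings and attempt cap here are illustrative, not minikube's actual retry parameters.
	
		package main
		
		import (
			"fmt"
			"math/rand"
			"os/exec"
			"time"
		)
		
		func main() {
			delay := 300 * time.Millisecond
			for attempt := 1; attempt <= 5; attempt++ {
				err := exec.Command("kubectl", "apply", "-f",
					"/etc/kubernetes/addons/storageclass.yaml").Run()
				if err == nil {
					fmt.Println("apply succeeded")
					return
				}
				jitter := time.Duration(rand.Int63n(int64(delay / 2)))
				fmt.Printf("attempt %d failed (%v); retrying after %v\n", attempt, err, delay+jitter)
				time.Sleep(delay + jitter)
				delay *= 2 // back off; bounded by the attempt cap
			}
			fmt.Println("giving up")
		}
	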
	I0510 17:52:47.967271 1062960 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:52:47.967381 1062960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:52:48.055318 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 17:52:48.055400 1062960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 17:52:48.060787 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:48.149914 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 17:52:48.149947 1062960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 17:52:48.175035 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 17:52:48.175070 1062960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 17:52:48.263718 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 17:52:48.263750 1062960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 17:52:48.282195 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:48.282227 1062960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 17:52:48.297636 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:48.359369 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:52.345196 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.598944537s)
	I0510 17:52:52.345534 1062960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.378119806s)
	I0510 17:52:52.345610 1062960 api_server.go:72] duration metric: took 5.117639828s to wait for apiserver process to appear ...
	I0510 17:52:52.345622 1062960 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:52:52.345683 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.350659 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 17:52:52.350693 1062960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0510 17:52:52.462305 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.401465129s)
	I0510 17:52:52.462425 1062960 addons.go:479] Verifying addon metrics-server=true in "newest-cni-173135"
	I0510 17:52:52.462366 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.164694895s)
	I0510 17:52:52.558877 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199364581s)
	I0510 17:52:52.560719 1062960 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-173135 addons enable metrics-server
	
	I0510 17:52:52.562364 1062960 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0510 17:52:52.563698 1062960 addons.go:514] duration metric: took 5.33556927s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0510 17:52:52.846151 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.850590 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0510 17:52:52.851935 1062960 api_server.go:141] control plane version: v1.33.0
	I0510 17:52:52.851968 1062960 api_server.go:131] duration metric: took 506.335848ms to wait for apiserver health ...
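	
	The healthz wait above first sees a 500 (the rbac/bootstrap-roles post-start hook has not finished) and then a 200 roughly half a second later. A minimal sketch of such a poll; TLS verification is skipped here purely for brevity, whereas minikube trusts the cluster CA.
	
		package main
		
		import (
			"crypto/tls"
			"fmt"
			"net/http"
			"time"
		)
		
		func main() {
			client := &http.Client{
				Timeout:   2 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			deadline := time.Now().Add(2 * time.Minute)
			for time.Now().Before(deadline) {
				resp, err := client.Get("https://192.168.94.2:8443/healthz")
				if err == nil {
					status := resp.StatusCode
					resp.Body.Close()
					if status == http.StatusOK {
						fmt.Println("apiserver healthy")
						return
					}
					fmt.Println("healthz returned", status) // 500 while bootstrap hooks finish
				}
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("timed out waiting for healthz")
		}
	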
	I0510 17:52:52.851979 1062960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:52:52.855964 1062960 system_pods.go:59] 9 kube-system pods found
	I0510 17:52:52.856013 1062960 system_pods.go:61] "coredns-674b8bbfcf-l2m27" [11b63e72-35af-4a70-a7d3-b11e18104e2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856039 1062960 system_pods.go:61] "etcd-newest-cni-173135" [60c35044-778d-45d4-8d96-e58efbd9b54b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 17:52:52.856062 1062960 system_pods.go:61] "kindnet-5nzlt" [9158a53c-5cd1-426c-a255-37618e292899] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0510 17:52:52.856073 1062960 system_pods.go:61] "kube-apiserver-newest-cni-173135" [790eeefa-f593-4148-b5f3-43bf9807166f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 17:52:52.856085 1062960 system_pods.go:61] "kube-controller-manager-newest-cni-173135" [75bdb232-66d8-442a-8566-34a3d4674876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 17:52:52.856096 1062960 system_pods.go:61] "kube-proxy-v2tt7" [e502d755-4ecb-4567-9259-547f7c063830] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 17:52:52.856108 1062960 system_pods.go:61] "kube-scheduler-newest-cni-173135" [8bfc0953-197d-4185-b2e7-6e1a2d97a8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 17:52:52.856117 1062960 system_pods.go:61] "metrics-server-f79f97bbb-z4g7z" [a6bcfd5e-6f32-43ef-a6e7-336c90faf9ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856125 1062960 system_pods.go:61] "storage-provisioner" [effda141-cd8d-4f87-97a1-9166c59e1de0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856132 1062960 system_pods.go:74] duration metric: took 4.146105ms to wait for pod list to return data ...
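	
	system_pods.go's check amounts to listing kube-system pods and inspecting their phases; the three Pending pods above are merely unschedulable until the node sheds its not-ready taint. A hedged client-go sketch of the same listing:
	
		package main
		
		import (
			"context"
			"fmt"
			"log"
		
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("",
				"/home/jenkins/minikube-integration/20720-722920/kubeconfig") // path from the log
			if err != nil {
				log.Fatal(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				log.Fatal(err)
			}
			pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
			if err != nil {
				log.Fatal(err)
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase == "Pending" {
					pending++
				}
			}
			fmt.Printf("%d kube-system pods, %d pending\n", len(pods.Items), pending)
		}
	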
	I0510 17:52:52.856143 1062960 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:52:52.858633 1062960 default_sa.go:45] found service account: "default"
	I0510 17:52:52.858658 1062960 default_sa.go:55] duration metric: took 2.507165ms for default service account to be created ...
	I0510 17:52:52.858670 1062960 kubeadm.go:578] duration metric: took 5.630701473s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:52.858701 1062960 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:52:52.861375 1062960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 17:52:52.861398 1062960 node_conditions.go:123] node cpu capacity is 8
	I0510 17:52:52.861411 1062960 node_conditions.go:105] duration metric: took 2.704535ms to run NodePressure ...
	I0510 17:52:52.861422 1062960 start.go:241] waiting for startup goroutines ...
	I0510 17:52:52.861431 1062960 start.go:246] waiting for cluster config update ...
	I0510 17:52:52.861444 1062960 start.go:255] writing updated cluster config ...
	I0510 17:52:52.861692 1062960 ssh_runner.go:195] Run: rm -f paused
	I0510 17:52:52.918445 1062960 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:52:52.920711 1062960 out.go:177] * Done! kubectl is now configured to use "newest-cni-173135" cluster and "default" namespace by default
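	
	The final check compares kubectl's minor version with the cluster's; kubectl officially supports servers within one minor version. A toy sketch of that skew computation, using the values from the log line above:
	
		package main
		
		import (
			"fmt"
			"strconv"
			"strings"
		)
		
		// minor extracts the minor component of a "major.minor.patch" version string.
		func minor(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			m, _ := strconv.Atoi(parts[1])
			return m
		}
		
		func main() {
			kubectlVer, clusterVer := "1.33.0", "1.33.0" // values from the log
			skew := minor(kubectlVer) - minor(clusterVer)
			if skew < 0 {
				skew = -skew
			}
			fmt.Printf("minor skew: %d\n", skew)
		}
	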
	
	
	==> CRI-O <==
	May 10 18:08:40 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:40.884068762Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c07d0934-f094-4f64-b38f-655c7b446182 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:43 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:43.885374602Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b4623d61-cbcd-44b1-ab3c-41d48fc2ae71 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:43 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:43.885678851Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b4623d61-cbcd-44b1-ab3c-41d48fc2ae71 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:55 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:55.887904921Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=73fc8ad1-a9dd-43e8-8a75-3dc1b536715e name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:55 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:55.888220238Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=73fc8ad1-a9dd-43e8-8a75-3dc1b536715e name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:57 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:57.883875361Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6cae5cce-5870-410c-9163-3fda2eac4e14 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:08:57 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:08:57.884101092Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6cae5cce-5870-410c-9163-3fda2eac4e14 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:07 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:07.883910666Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=22fca8e3-1d47-4b86-8278-7682cd9881e4 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:07 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:07.884184848Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=22fca8e3-1d47-4b86-8278-7682cd9881e4 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:10 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:10.884067844Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1f9643f2-99d5-4194-9321-bbb010459e7e name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:10 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:10.884329818Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1f9643f2-99d5-4194-9321-bbb010459e7e name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:21 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:21.884624604Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c3178b8b-673b-4597-b130-f6275d4fd9c8 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:21 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:21.884671513Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6680ad82-c607-4f1e-bc74-00ebb96695f3 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:21 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:21.884905169Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c3178b8b-673b-4597-b130-f6275d4fd9c8 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:21 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:21.885016312Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6680ad82-c607-4f1e-bc74-00ebb96695f3 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:36 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:36.884212398Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2daf5642-4667-4cb0-983e-66915564c1b4 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:36 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:36.884266979Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d44e1bea-4f6d-4027-a723-a6b614a01fd8 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:36 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:36.884529972Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d44e1bea-4f6d-4027-a723-a6b614a01fd8 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:36 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:36.884575636Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2daf5642-4667-4cb0-983e-66915564c1b4 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:47 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:47.885306715Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=54dc3dbe-466b-4bc1-84ce-6ba094b93e94 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:47 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:47.885542085Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=54dc3dbe-466b-4bc1-84ce-6ba094b93e94 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:51 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:51.884005259Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ff3ae2f7-2ddf-4ae5-a795-be70c0e08809 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:51 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:51.884348419Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ff3ae2f7-2ddf-4ae5-a795-be70c0e08809 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:59 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:59.884017100Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5d5bb14a-540a-43b8-8324-20598c783a32 name=/runtime.v1.ImageService/ImageStatus
	May 10 18:09:59 default-k8s-diff-port-676255 crio[670]: time="2025-05-10 18:09:59.884310695Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5d5bb14a-540a-43b8-8324-20598c783a32 name=/runtime.v1.ImageService/ImageStatus
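	
	The CRI-O entries above are kubelet-driven ImageStatus probes; fake.domain/registry.k8s.io/echoserver:1.4 is a deliberately unreachable test image, so "not found" repeats as the pull backs off. One way to reproduce such a probe by hand is crictl inspecti, sketched here assuming root and crictl on PATH:
	
		package main
		
		import (
			"fmt"
			"os/exec"
		)
		
		func main() {
			img := "fake.domain/registry.k8s.io/echoserver:1.4" // intentionally unresolvable test image
			if err := exec.Command("sudo", "crictl", "inspecti", img).Run(); err != nil {
				fmt.Printf("image %s not present: %v\n", img, err) // matches the "not found" lines above
				return
			}
			fmt.Println("image present")
		}
	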
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	ad6c81b06f063       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   6c46559b9197b       dashboard-metrics-scraper-86c6bf9756-zj28d
	d78df9c428b8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   69fec956f4437       storage-provisioner
	c69d9fdd9ca0e       1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b   18 minutes ago      Running             coredns                     1                   a29be636294cc       coredns-674b8bbfcf-lv75k
	ca345a87b4c84       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f   18 minutes ago      Running             kindnet-cni                 1                   d3c4cfbf8e617       kindnet-g27zc
	84cfd522b5eb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   69fec956f4437       storage-provisioner
	85b8082e0d4eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   ea407ebdf019d       busybox
	12bbd396a9677       f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68   18 minutes ago      Running             kube-proxy                  1                   810de448710f2       kube-proxy-hfrsv
	c5614524272b1       8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4   18 minutes ago      Running             kube-scheduler              1                   d8a015ee04be7       kube-scheduler-default-k8s-diff-port-676255
	8b6a9b2c8306c       1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02   18 minutes ago      Running             kube-controller-manager     1                   d6a819fee72e3       kube-controller-manager-default-k8s-diff-port-676255
	4fc9c98394541       6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4   18 minutes ago      Running             kube-apiserver              1                   e43cd8380104e       kube-apiserver-default-k8s-diff-port-676255
	23663520f51e3       499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1   18 minutes ago      Running             etcd                        1                   f4a8c26370e39       etcd-default-k8s-diff-port-676255
	
	
	==> coredns [c69d9fdd9ca0e75c02f7f9679695858d9d3833cad45c36ca2b31e62a02e4d695] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.0
	linux/amd64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:39968 - 58670 "HINFO IN 1433296294397684971.295867134072264484. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.052083969s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-676255
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-676255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=default-k8s-diff-port-676255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_50_24_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:50:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-676255
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:09:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:07:06 +0000   Sat, 10 May 2025 17:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:07:06 +0000   Sat, 10 May 2025 17:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:07:06 +0000   Sat, 10 May 2025 17:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:07:06 +0000   Sat, 10 May 2025 17:50:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-676255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0de183547024632ba286ea91266c983
	  System UUID:                4910076d-027f-4fc1-91a0-466c135c9938
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.33.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-674b8bbfcf-lv75k                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-676255                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-g27zc                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-676255             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-676255    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-hfrsv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-676255             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-f79f97bbb-xxd6x                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-zj28d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-tq4tr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node default-k8s-diff-port-676255 event: Registered Node default-k8s-diff-port-676255 in Controller
	  Normal   NodeReady                19m                kubelet          Node default-k8s-diff-port-676255 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-676255 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-676255 event: Registered Node default-k8s-diff-port-676255 in Controller
	
	
	==> dmesg <==
	[  +1.019813] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000003] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000002] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +4.095573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000007] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000001] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000002] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +3.075626] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000002] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +1.019906] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000006] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	
	
	==> etcd [23663520f51e3c0d2c766772ab95952e2566e29f8c574114752f6ec472da9202] <==
	{"level":"info","ts":"2025-05-10T17:51:14.988872Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-05-10T17:51:14.988998Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:14.989058Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-05-10T17:51:16.751992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.752165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.752246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-05-10T17:51:16.752305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.752385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.752441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.752480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-05-10T17:51:16.773304Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-676255 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-05-10T17:51:16.773452Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.773474Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-05-10T17:51:16.774750Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.773722Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.775543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-05-10T17:51:16.775756Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-05-10T17:51:16.776030Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-05-10T17:51:16.776726Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-05-10T18:01:16.803171Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":993}
	{"level":"info","ts":"2025-05-10T18:01:16.809576Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":993,"took":"6.016388ms","hash":3226328856,"current-db-size-bytes":3411968,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":3411968,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2025-05-10T18:01:16.809617Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3226328856,"revision":993,"compact-revision":-1}
	{"level":"info","ts":"2025-05-10T18:06:16.808173Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1271}
	{"level":"info","ts":"2025-05-10T18:06:16.811263Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":1271,"took":"2.786224ms","hash":4273476371,"current-db-size-bytes":3411968,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1986560,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-05-10T18:06:16.811302Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4273476371,"revision":1271,"compact-revision":993}
	
	
	==> kernel <==
	 18:10:03 up  3:52,  0 users,  load average: 0.62, 0.62, 2.09
	Linux default-k8s-diff-port-676255 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ca345a87b4c84411cddfb20cabc81f58266e9f50c474f8cbfe49db03041191b8] <==
	I0510 18:08:01.348357       1 main.go:301] handling current node
	I0510 18:08:11.348689       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:08:11.348798       1 main.go:301] handling current node
	I0510 18:08:21.348505       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:08:21.348540       1 main.go:301] handling current node
	I0510 18:08:31.349078       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:08:31.349119       1 main.go:301] handling current node
	I0510 18:08:41.350280       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:08:41.350327       1 main.go:301] handling current node
	I0510 18:08:51.348550       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:08:51.348586       1 main.go:301] handling current node
	I0510 18:09:01.348565       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:09:01.348603       1 main.go:301] handling current node
	I0510 18:09:11.357668       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:09:11.357704       1 main.go:301] handling current node
	I0510 18:09:21.349023       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:09:21.349062       1 main.go:301] handling current node
	I0510 18:09:31.355512       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:09:31.355546       1 main.go:301] handling current node
	I0510 18:09:41.348588       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:09:41.348622       1 main.go:301] handling current node
	I0510 18:09:51.349767       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:09:51.349820       1 main.go:301] handling current node
	I0510 18:10:01.350092       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0510 18:10:01.350153       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fc9c9839454111ce0acea94e13826a65df3ab1fd0d76bc66399621e014e91bd] <==
	W0510 18:06:19.785635       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 18:06:19.785812       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 18:06:19.786853       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:06:19.786866       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 18:07:19.787512       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 18:07:19.787525       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 18:07:19.787627       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 18:07:19.787637       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 18:07:19.788770       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:07:19.788776       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0510 18:09:19.789136       1 handler_proxy.go:99] no RequestInfo found in the context
	W0510 18:09:19.789138       1 handler_proxy.go:99] no RequestInfo found in the context
	E0510 18:09:19.789217       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0510 18:09:19.789257       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0510 18:09:19.790336       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:09:19.790358       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
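	
	Every error in this block traces to one condition: the aggregated APIService v1beta1.metrics.k8s.io has no healthy backend, so OpenAPI aggregation gets a 503 and requeues. The kubelet log below shows why: this test points metrics-server at the unresolvable registry fake.domain, leaving its pod stuck in ImagePullBackOff. A quick confirmation, assuming the stock k8s-app=metrics-server label:
	
	  # AVAILABLE should report False while the backend pod cannot start:
	  kubectl --context default-k8s-diff-port-676255 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-676255 -n kube-system describe pod -l k8s-app=metrics-server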
	
	
	==> kube-controller-manager [8b6a9b2c8306c1d2fb0fc2a82a75f8469160a819fbad7fb3d2e438380d74986a] <==
	I0510 18:03:53.840656       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:04:23.399837       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:04:23.846954       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:04:53.404660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:04:53.853942       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:05:23.410101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:05:23.861090       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:05:53.415701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:05:53.867570       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:06:23.421214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:06:23.874185       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:06:53.426968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:06:53.881195       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:07:23.432874       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:07:23.889267       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:07:53.439254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:07:53.896601       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:08:23.444538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:08:23.903983       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:08:53.450880       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:08:53.911137       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:09:23.456960       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:09:23.918012       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0510 18:09:53.461874       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0510 18:09:53.924918       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
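	
	Both controllers repeat every 30 seconds for the same underlying reason as the apiserver block above: aggregated discovery still advertises metrics.k8s.io/v1beta1 while its backend is down, so resource-quota and garbage-collector discovery stays stale. The symptom is also visible client-side: while the APIService is unavailable, kubectl --context default-k8s-diff-port-676255 api-resources prints a matching "unable to retrieve the complete list of server APIs" error.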
	
	
	==> kube-proxy [12bbd396a96776e2aa4a9fdafd487d693b6f6ae41b20eb30bb7af563e6f9da7c] <==
	I0510 17:51:20.673329       1 server_linux.go:63] "Using iptables proxy"
	I0510 17:51:21.008748       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0510 17:51:21.008832       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0510 17:51:21.112762       1 server.go:254] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0510 17:51:21.112825       1 server_linux.go:145] "Using iptables Proxier"
	I0510 17:51:21.160961       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0510 17:51:21.180186       1 server.go:516] "Version info" version="v1.33.0"
	I0510 17:51:21.180237       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:21.181926       1 config.go:199] "Starting service config controller"
	I0510 17:51:21.182017       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0510 17:51:21.184449       1 config.go:440] "Starting serviceCIDR config controller"
	I0510 17:51:21.187346       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0510 17:51:21.187405       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0510 17:51:21.186804       1 config.go:329] "Starting node config controller"
	I0510 17:51:21.188586       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0510 17:51:21.188656       1 config.go:105] "Starting endpoint slice config controller"
	I0510 17:51:21.188684       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0510 17:51:21.283006       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0510 17:51:21.289399       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0510 17:51:21.290199       1 shared_informer.go:357] "Caches are synced" controller="node config"
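	
	The only complaint here is the unset nodePortAddresses, which makes NodePorts listen on every local IP. If that matters, the warning's own suggestion translates into kube-proxy configuration as below; a minimal sketch, assuming a kubeadm-style kube-proxy ConfigMap is where it is set:
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  # Accept NodePort connections only on each node's primary IP instead of
	  # all local IPs (equivalent to the suggested --nodeport-addresses primary).
	  nodePortAddresses:
	  - primary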
	
	
	==> kube-scheduler [c5614524272b1c26f34452e570f1198fae85debe604d4a3fa17071029baaa020] <==
	I0510 17:51:16.115763       1 serving.go:386] Generated self-signed cert in-memory
	W0510 17:51:18.759528       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0510 17:51:18.759653       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0510 17:51:18.759696       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0510 17:51:18.759735       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0510 17:51:19.115189       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.0"
	I0510 17:51:19.115237       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0510 17:51:19.167124       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0510 17:51:19.167523       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0510 17:51:19.167598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:19.189540       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0510 17:51:19.299579       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
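	
	The startup warnings are the scheduler racing RBAC propagation for the extension-apiserver-authentication ConfigMap; the later "Caches are synced" line suggests it recovered on its own. If the warning persisted, the log's own suggestion, filled in for this scheduler (which authenticates as the user system:kube-scheduler rather than a service account), would be roughly:
	
	  kubectl --context default-k8s-diff-port-676255 -n kube-system create rolebinding scheduler-auth-reader \
	    --role=extension-apiserver-authentication-reader --user=system:kube-scheduler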
	
	
	==> kubelet <==
	May 10 18:09:18 default-k8s-diff-port-676255 kubelet[810]: I0510 18:09:18.883945     810 scope.go:117] "RemoveContainer" containerID="ad6c81b06f063dc5da329979b4db40206385572d79651125b5795f580735e390"
	May 10 18:09:18 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:18.884155     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:09:21 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:21.885171     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:09:21 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:21.885646     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:09:24 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:24.013758     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900564013504144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:24 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:24.013806     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900564013504144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:31 default-k8s-diff-port-676255 kubelet[810]: I0510 18:09:31.883307     810 scope.go:117] "RemoveContainer" containerID="ad6c81b06f063dc5da329979b4db40206385572d79651125b5795f580735e390"
	May 10 18:09:31 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:31.883610     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:09:34 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:34.014949     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900574014740323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:34 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:34.014996     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900574014740323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:36 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:36.884829     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:09:36 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:36.884829     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:09:42 default-k8s-diff-port-676255 kubelet[810]: I0510 18:09:42.883219     810 scope.go:117] "RemoveContainer" containerID="ad6c81b06f063dc5da329979b4db40206385572d79651125b5795f580735e390"
	May 10 18:09:42 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:42.883540     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:09:44 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:44.016201     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900584015980620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:44 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:44.016246     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900584015980620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:47 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:47.885788     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:09:51 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:51.884618     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-tq4tr" podUID="62e44ee1-f320-4a22-bf54-04c5efdd417e"
	May 10 18:09:54 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:54.017416     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900594017209146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:54 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:54.017463     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900594017209146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:09:57 default-k8s-diff-port-676255 kubelet[810]: I0510 18:09:57.883274     810 scope.go:117] "RemoveContainer" containerID="ad6c81b06f063dc5da329979b4db40206385572d79651125b5795f580735e390"
	May 10 18:09:57 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:57.883582     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zj28d_kubernetes-dashboard(41905a30-bc1c-4bc3-aec5-605250c6efb1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zj28d" podUID="41905a30-bc1c-4bc3-aec5-605250c6efb1"
	May 10 18:09:59 default-k8s-diff-port-676255 kubelet[810]: E0510 18:09:59.884651     810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-xxd6x" podUID="531862bb-0aa3-4428-acfb-19097f9436c9"
	May 10 18:10:04 default-k8s-diff-port-676255 kubelet[810]: E0510 18:10:04.018814     810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900604018530959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	May 10 18:10:04 default-k8s-diff-port-676255 kubelet[810]: E0510 18:10:04.018854     810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1746900604018530959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176179,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
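	
	Three separate failure loops interleave in this block: dashboard-metrics-scraper is in CrashLoopBackOff (the restart back-off doubles from 10s up to kubelet's 5m0s ceiling, already reached here), the dashboard and metrics-server images are in ImagePullBackOff for different reasons (Docker Hub rate limiting vs. the deliberately bogus fake.domain registry), and the eviction manager logs every 10s because kubelet treats cri-o's ImageFsInfo response as incomplete (note the empty ContainerFilesystems in the dumped struct). The back-offs can be watched from outside with:
	
	  kubectl --context default-k8s-diff-port-676255 get pods -n kubernetes-dashboard -w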
	
	
	==> storage-provisioner [84cfd522b5eb4f451a059a4b94f09aa492445664302f769cd7550687083f819e] <==
	I0510 17:51:20.560456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:51:50.563480       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
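	
	This first instance exited fatally 30 seconds after starting: the dial to the in-cluster API VIP 10.96.0.1:443 timed out, most plausibly because service routing was not yet fully in place this early in boot; the replacement container in the next block reaches the API without trouble. The VIP is the default kubernetes Service, visible with kubectl --context default-k8s-diff-port-676255 get svc kubernetes.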
	
	
	==> storage-provisioner [d78df9c428b8fba424429e774d336b320abe9ad03549c02406d1429438773830] <==
	W0510 18:09:38.620616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:40.624030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:40.627787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:42.630233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:42.634206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:44.637512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:44.643381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:46.646456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:46.650417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:48.653597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:48.657512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:50.660525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:50.664615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:52.668036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:52.673415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:54.676347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:54.680214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:56.683092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:56.687348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:58.691207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:09:58.696912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:10:00.700071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:10:00.704555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:10:02.707873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0510 18:10:02.722423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
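	
	These warnings are harmless: the provisioner polls what appears to be its Endpoints-based leader-election lock roughly every two seconds, and v1.33 now flags the v1 Endpoints API as deprecated on every request. Nothing is broken; the replacement objects can be listed with kubectl --context default-k8s-diff-port-676255 get endpointslices -A.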
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 describe pod metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-676255 describe pod metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr: exit status 1 (57.239709ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-xxd6x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-tq4tr" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-676255 describe pod metrics-server-f79f97bbb-xxd6x kubernetes-dashboard-7779f9b69b-tq4tr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.65s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6bj6d" [8dfa2561-0fd4-4df5-93e1-f807fe41266a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0510 18:02:03.265552  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:02:27.671792  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:02:37.678967  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:02:44.668333  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:03:57.104608  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:04:05.059585  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:04:24.845050  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:04:25.001971  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:05:36.299444  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:06:21.808365  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:07:03.265476  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:07:27.671661  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:07:37.679674  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:07:44.668464  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:07:44.871919  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:08:50.735374  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:08:57.105400  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:09:00.744460  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:09:05.059591  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:09:07.732145  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:09:24.845634  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:09:25.002181  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-05-10 18:10:47.129366943 +0000 UTC m=+4586.811154078
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-697935 describe po kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-697935 describe po kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard:
Name:             kubernetes-dashboard-cd95d586-6bj6d
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-697935/192.168.103.2
Start Time:       Sat, 10 May 2025 17:51:32 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=cd95d586
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-cd95d586
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-lrxrc (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kubernetes-dashboard-token-lrxrc:
Type:        Secret (a volume populated by a Secret)
SecretName:  kubernetes-dashboard-token-lrxrc
Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  19m                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-cd95d586-6bj6d to old-k8s-version-697935
Normal   Pulling    16m (x4 over 19m)    kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x3 over 18m)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x4 over 18m)    kubelet            Error: ErrImagePull
Warning  Failed     14m (x11 over 18m)   kubelet            Error: ImagePullBackOff
Normal   BackOff    9m2s (x26 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m46s (x3 over 17m)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
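
Both pull failures recorded in these events are Docker Hub's unauthenticated rate limit, not a broken image or tag. Two workaround sketches, assuming the host running minikube has docker and pull quota of its own:

  # Authenticate on the host, which raises the Hub quota used for the host-side pull:
  docker login
  # Then side-load the digest-pinned image into the node so crio never pulls from the Hub:
  minikube -p old-k8s-version-697935 image load docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
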
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-697935 logs kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-697935 logs kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard: exit status 1 (75.275744ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-cd95d586-6bj6d" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-697935 logs kubernetes-dashboard-cd95d586-6bj6d -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-697935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-697935
helpers_test.go:235: (dbg) docker inspect old-k8s-version-697935:

-- stdout --
	[
	    {
	        "Id": "eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1",
	        "Created": "2025-05-10T17:48:25.557404666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1044519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-05-10T17:50:53.432208071Z",
	            "FinishedAt": "2025-05-10T17:50:52.531319087Z"
	        },
	        "Image": "sha256:e9e814e304601d171cd7a05fe946703c6fbd63c3e77415c5bcfe31c3cddbbe5f",
	        "ResolvConfPath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/hosts",
	        "LogPath": "/var/lib/docker/containers/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1/eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1-json.log",
	        "Name": "/old-k8s-version-697935",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-697935:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-697935",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eb68cd4666de00d44036e5c7bd833715d24f515c699b15c01a16d0cfc39ad4f1",
	                "LowerDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9-init/diff:/var/lib/docker/overlay2/d562a19931b28d74981554e3e67ffc7804c8c483ec96f024e40ef2be1bf23f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8bd73192116b138eaad2fa16c9fbfd3b433aef04c9a5c29d79f5127ccfb35d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-697935",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-697935/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-697935",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-697935",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-697935",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3258bf027cb8b69c815869d87c662acfa78f86254269772044555e9f22043439",
	            "SandboxKey": "/var/run/docker/netns/3258bf027cb8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-697935": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:83:4d:ad:3a:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec25a068cacdea5d21bc1a6d5632ec61740de3d163f84e29a86d0b23f4aa28df",
	                    "EndpointID": "67aa9a0dffc46ed701fed3e6000c482d56647c162de59793697cff54360d1d2d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-697935",
	                        "eb68cd4666de"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
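
The inspect output itself shows a healthy node container: running since 17:50:53, capped at 2200 MiB of memory and 2 CPUs, with the API server port 8443 published on 127.0.0.1:33482. Individual mappings can be queried without wading through the JSON, e.g. docker port old-k8s-version-697935 8443.
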
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697935 -n old-k8s-version-697935
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-697935 logs -n 25
E0510 18:10:47.910452  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:10:48.067990  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:10:48.409869  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/default-k8s-diff-port-676255/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-697935 logs -n 25: (1.142711953s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-058078 image list                           | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p no-preload-058078                                   | no-preload-058078            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-173135             | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-173135                  | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-173135 --memory=2200 --alsologtostderr   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-173135 image list                           | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| delete  | -p newest-cni-173135                                   | newest-cni-173135            | jenkins | v1.35.0 | 10 May 25 17:52 UTC | 10 May 25 17:52 UTC |
	| image   | embed-certs-256321 image list                          | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	| image   | default-k8s-diff-port-676255                           | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-256321                                  | embed-certs-256321           | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-676255 | jenkins | v1.35.0 | 10 May 25 18:10 UTC | 10 May 25 18:10 UTC |
	|         | default-k8s-diff-port-676255                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 17:52:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 17:52:39.942859 1062960 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:52:39.943098 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943129 1062960 out.go:358] Setting ErrFile to fd 2...
	I0510 17:52:39.943146 1062960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:52:39.943562 1062960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:52:39.944604 1062960 out.go:352] Setting JSON to false
	I0510 17:52:39.945997 1062960 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12907,"bootTime":1746886653,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:52:39.946130 1062960 start.go:140] virtualization: kvm guest
	I0510 17:52:39.948309 1062960 out.go:177] * [newest-cni-173135] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:52:39.949674 1062960 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:52:39.949716 1062960 notify.go:220] Checking for updates...
	I0510 17:52:39.952354 1062960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:52:39.953722 1062960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:39.955058 1062960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:52:39.956484 1062960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:52:39.957799 1062960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:52:39.959587 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:39.960145 1062960 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:52:39.985577 1062960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:52:39.985704 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.035501 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.02617924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.035611 1062960 docker.go:318] overlay module found
	I0510 17:52:40.037784 1062960 out.go:177] * Using the docker driver based on existing profile
	I0510 17:52:40.039108 1062960 start.go:304] selected driver: docker
	I0510 17:52:40.039123 1062960 start.go:908] validating driver "docker" against &{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.039239 1062960 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:52:40.040135 1062960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:52:40.092965 1062960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:52:40.084143213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:52:40.093291 1062960 start_flags.go:994] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:40.093320 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:40.093383 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:40.093421 1062960 start.go:347] cluster config:
	{Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:40.096146 1062960 out.go:177] * Starting "newest-cni-173135" primary control-plane node in "newest-cni-173135" cluster
	I0510 17:52:40.097483 1062960 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 17:52:40.098838 1062960 out.go:177] * Pulling base image v0.0.46-1746731792-20718 ...
	I0510 17:52:40.100016 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:40.100054 1062960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
	I0510 17:52:40.100073 1062960 cache.go:56] Caching tarball of preloaded images
	I0510 17:52:40.100128 1062960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 17:52:40.100157 1062960 preload.go:172] Found /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0510 17:52:40.100165 1062960 cache.go:59] Finished verifying existence of preloaded tar for v1.33.0 on crio
	I0510 17:52:40.100261 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.120688 1062960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon, skipping pull
	I0510 17:52:40.120714 1062960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 exists in daemon, skipping load
	I0510 17:52:40.120734 1062960 cache.go:230] Successfully downloaded all kic artifacts
	I0510 17:52:40.120784 1062960 start.go:360] acquireMachinesLock for newest-cni-173135: {Name:mk75975d6daf4063f8ba79544d03229010ceb1f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0510 17:52:40.120860 1062960 start.go:364] duration metric: took 50.497µs to acquireMachinesLock for "newest-cni-173135"
	I0510 17:52:40.120885 1062960 start.go:96] Skipping create...Using existing machine configuration
	I0510 17:52:40.120892 1062960 fix.go:54] fixHost starting: 
	I0510 17:52:40.121107 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.139354 1062960 fix.go:112] recreateIfNeeded on newest-cni-173135: state=Stopped err=<nil>
	W0510 17:52:40.139386 1062960 fix.go:138] unexpected machine state, will restart: <nil>
	I0510 17:52:40.141294 1062960 out.go:177] * Restarting existing docker container for "newest-cni-173135" ...
	W0510 17:52:39.629875 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	W0510 17:52:41.630228 1044308 pod_ready.go:104] pod "etcd-old-k8s-version-697935" is not "Ready", error: <nil>
	I0510 17:52:43.131391 1044308 pod_ready.go:94] pod "etcd-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.131443 1044308 pod_ready.go:86] duration metric: took 50.006172737s for pod "etcd-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.134286 1044308 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.138012 1044308 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.138036 1044308 pod_ready.go:86] duration metric: took 3.724234ms for pod "kube-apiserver-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.140268 1044308 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.143330 1044308 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.143350 1044308 pod_ready.go:86] duration metric: took 3.063093ms for pod "kube-controller-manager-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.145633 1044308 pod_ready.go:83] waiting for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.329167 1044308 pod_ready.go:94] pod "kube-proxy-8tdw4" is "Ready"
	I0510 17:52:43.329196 1044308 pod_ready.go:86] duration metric: took 183.5398ms for pod "kube-proxy-8tdw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.529673 1044308 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929860 1044308 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-697935" is "Ready"
	I0510 17:52:43.929890 1044308 pod_ready.go:86] duration metric: took 400.187942ms for pod "kube-scheduler-old-k8s-version-697935" in "kube-system" namespace to be "Ready" or be gone ...
	I0510 17:52:43.929904 1044308 pod_ready.go:40] duration metric: took 1m22.819056587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0510 17:52:43.974390 1044308 start.go:607] kubectl: 1.33.0, cluster: 1.20.0 (minor skew: 13)
	I0510 17:52:43.975971 1044308 out.go:201] 
	W0510 17:52:43.977399 1044308 out.go:270] ! /usr/local/bin/kubectl is version 1.33.0, which may have incompatibilities with Kubernetes 1.20.0.
	I0510 17:52:43.978880 1044308 out.go:177]   - Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
	I0510 17:52:43.980215 1044308 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-697935" cluster and "default" namespace by default
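The warning above comes from comparing the kubectl client's minor version with the cluster's (start.go:607 reports "minor skew: 13"). A minimal sketch of that comparison, assuming plain "major.minor.patch" strings; minikube's actual computation may differ:

// skew.go: hypothetical reconstruction of the kubectl/cluster minor-skew check.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a version like "1.33.0" or "v1.20.0".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.33.0", "1.20.0"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	// kubectl only guarantees compatibility within one minor version of the
	// server, so a skew of 13 triggers the warning printed above.
}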
	I0510 17:52:40.142629 1062960 cli_runner.go:164] Run: docker start newest-cni-173135
	I0510 17:52:40.387277 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:40.406155 1062960 kic.go:430] container "newest-cni-173135" state is running.
	I0510 17:52:40.406603 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:40.425434 1062960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/config.json ...
	I0510 17:52:40.425733 1062960 machine.go:93] provisionDockerMachine start ...
	I0510 17:52:40.425813 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:40.446701 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:40.446942 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:40.446954 1062960 main.go:141] libmachine: About to run SSH command:
	hostname
	I0510 17:52:40.447629 1062960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38662->127.0.0.1:33504: read: connection reset by peer
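The dial failure above is transient: the container was only just restarted, so nothing is listening on the published SSH port yet, and the next line shows a later attempt succeeding. A minimal sketch of the wait-for-port pattern with fixed-interval retries (libmachine's actual backoff policy may differ):

// waitport.go: hypothetical fixed-interval retry until a TCP port accepts.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 5*time.Second); err == nil {
			return c, nil
		}
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// 127.0.0.1:33504 is the published 22/tcp port from the log lines above.
	c, err := dialWithRetry("127.0.0.1:33504", 10, time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	c.Close()
	fmt.Println("ssh port reachable")
}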
	I0510 17:52:43.567334 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.567369 1062960 ubuntu.go:169] provisioning hostname "newest-cni-173135"
	I0510 17:52:43.567474 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.585810 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.586092 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.586114 1062960 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-173135 && echo "newest-cni-173135" | sudo tee /etc/hostname
	I0510 17:52:43.720075 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-173135
	
	I0510 17:52:43.720180 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:43.738458 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:43.738683 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:43.738700 1062960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-173135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-173135/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-173135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0510 17:52:43.860357 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0510 17:52:43.860392 1062960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20720-722920/.minikube CaCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20720-722920/.minikube}
	I0510 17:52:43.860425 1062960 ubuntu.go:177] setting up certificates
	I0510 17:52:43.860438 1062960 provision.go:84] configureAuth start
	I0510 17:52:43.860501 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:43.878837 1062960 provision.go:143] copyHostCerts
	I0510 17:52:43.878913 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem, removing ...
	I0510 17:52:43.878934 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem
	I0510 17:52:43.879010 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/ca.pem (1078 bytes)
	I0510 17:52:43.879140 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem, removing ...
	I0510 17:52:43.879154 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem
	I0510 17:52:43.879187 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/cert.pem (1123 bytes)
	I0510 17:52:43.879281 1062960 exec_runner.go:144] found /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem, removing ...
	I0510 17:52:43.879293 1062960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem
	I0510 17:52:43.879328 1062960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20720-722920/.minikube/key.pem (1675 bytes)
	I0510 17:52:43.879447 1062960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem org=jenkins.newest-cni-173135 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-173135]
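provision.go above issues a server certificate whose SANs cover every name and address the machine answers to: 127.0.0.1, 192.168.94.2, localhost, minikube, and newest-cni-173135. A condensed crypto/x509 sketch of producing such a certificate, self-signed here for brevity where the real flow signs with the shared minikube CA:

// servercert.go: illustrative only, not minikube's implementation.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-173135"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision step above, split by type.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-173135"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued server cert,", len(der), "DER bytes")
}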
	I0510 17:52:44.399990 1062960 provision.go:177] copyRemoteCerts
	I0510 17:52:44.400060 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0510 17:52:44.400097 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.417363 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.509498 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0510 17:52:44.533816 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0510 17:52:44.556664 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0510 17:52:44.579844 1062960 provision.go:87] duration metric: took 719.387116ms to configureAuth
	I0510 17:52:44.579874 1062960 ubuntu.go:193] setting minikube options for container-runtime
	I0510 17:52:44.580082 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:44.580225 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.597779 1062960 main.go:141] libmachine: Using SSH client type: native
	I0510 17:52:44.597997 1062960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836380] 0x839080 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0510 17:52:44.598015 1062960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0510 17:52:44.861571 1062960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0510 17:52:44.861603 1062960 machine.go:96] duration metric: took 4.435849898s to provisionDockerMachine
	I0510 17:52:44.861615 1062960 start.go:293] postStartSetup for "newest-cni-173135" (driver="docker")
	I0510 17:52:44.861633 1062960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0510 17:52:44.861696 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0510 17:52:44.861741 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:44.880393 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:44.968863 1062960 ssh_runner.go:195] Run: cat /etc/os-release
	I0510 17:52:44.972444 1062960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0510 17:52:44.972471 1062960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0510 17:52:44.972479 1062960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0510 17:52:44.972486 1062960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0510 17:52:44.972499 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/addons for local assets ...
	I0510 17:52:44.972551 1062960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20720-722920/.minikube/files for local assets ...
	I0510 17:52:44.972632 1062960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem -> 7298152.pem in /etc/ssl/certs
	I0510 17:52:44.972715 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0510 17:52:44.981250 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:45.004513 1062960 start.go:296] duration metric: took 142.88043ms for postStartSetup
	I0510 17:52:45.004636 1062960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:52:45.004699 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.022563 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.108643 1062960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0510 17:52:45.113165 1062960 fix.go:56] duration metric: took 4.992266927s for fixHost
	I0510 17:52:45.113190 1062960 start.go:83] releasing machines lock for "newest-cni-173135", held for 4.992317581s
	I0510 17:52:45.113270 1062960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-173135
	I0510 17:52:45.130656 1062960 ssh_runner.go:195] Run: cat /version.json
	I0510 17:52:45.130728 1062960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0510 17:52:45.130785 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.130732 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:45.149250 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.153557 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:45.235894 1062960 ssh_runner.go:195] Run: systemctl --version
	I0510 17:52:45.328928 1062960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0510 17:52:45.467882 1062960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0510 17:52:45.472485 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.480914 1062960 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0510 17:52:45.480989 1062960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0510 17:52:45.489392 1062960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
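The find/mv pair above disables competing CNI configs non-destructively: anything matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix so the runtime ignores it while kindnet is installed, and the rename is trivially reversible. A rough Go equivalent of that pass (paths and patterns taken from the command above):

// cnidisable.go: sketch of the rename-based disable, not minikube's code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already parked
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}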
	I0510 17:52:45.489423 1062960 start.go:495] detecting cgroup driver to use...
	I0510 17:52:45.489464 1062960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0510 17:52:45.489535 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0510 17:52:45.501274 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0510 17:52:45.512452 1062960 docker.go:225] disabling cri-docker service (if available) ...
	I0510 17:52:45.512528 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0510 17:52:45.524828 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0510 17:52:45.535636 1062960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0510 17:52:45.618303 1062960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0510 17:52:45.695586 1062960 docker.go:241] disabling docker service ...
	I0510 17:52:45.695664 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0510 17:52:45.707968 1062960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0510 17:52:45.719029 1062960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0510 17:52:45.800197 1062960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0510 17:52:45.887455 1062960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0510 17:52:45.898860 1062960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0510 17:52:45.914760 1062960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0510 17:52:45.914818 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.924202 1062960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0510 17:52:45.924260 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.933839 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.944911 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.954202 1062960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0510 17:52:45.962950 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.972583 1062960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.981599 1062960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0510 17:52:45.991016 1062960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0510 17:52:45.999017 1062960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0510 17:52:46.007316 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.090516 1062960 ssh_runner.go:195] Run: sudo systemctl restart crio
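The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a fresh file, so settings minikube does not manage survive the restart: they pin pause_image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, set conmon_cgroup to pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go rendering of the first two rewrites; the sample input is invented:

// criocfg.go: sketch of the in-place key rewrites the sed commands perform.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}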
	I0510 17:52:46.208208 1062960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0510 17:52:46.208290 1062960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0510 17:52:46.212169 1062960 start.go:563] Will wait 60s for crictl version
	I0510 17:52:46.212233 1062960 ssh_runner.go:195] Run: which crictl
	I0510 17:52:46.215714 1062960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0510 17:52:46.250179 1062960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0510 17:52:46.250256 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.286288 1062960 ssh_runner.go:195] Run: crio --version
	I0510 17:52:46.324763 1062960 out.go:177] * Preparing Kubernetes v1.33.0 on CRI-O 1.24.6 ...
	I0510 17:52:46.326001 1062960 cli_runner.go:164] Run: docker network inspect newest-cni-173135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0510 17:52:46.342321 1062960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0510 17:52:46.346220 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
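Both hosts-file touch-ups in this phase, host.minikube.internal here and control-plane.minikube.internal further down, use the same replace-or-append idiom: grep -v drops any stale line for the name, the fresh mapping is appended, and the result is copied back over /etc/hosts with sudo. An in-memory Go sketch of that upsert (hypothetical function, not the ssh_runner code):

// hostsentry.go: idempotent replace-or-append for a tab-separated hosts line.
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name) // append the fresh one
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.94.1", "host.minikube.internal"))
}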
	I0510 17:52:46.358987 1062960 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0510 17:52:46.360438 1062960 kubeadm.go:875] updating cluster {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0510 17:52:46.360585 1062960 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
	I0510 17:52:46.360654 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.402300 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.402322 1062960 crio.go:433] Images already preloaded, skipping extraction
	I0510 17:52:46.402371 1062960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0510 17:52:46.438279 1062960 crio.go:514] all images are preloaded for cri-o runtime.
	I0510 17:52:46.438310 1062960 cache_images.go:84] Images are preloaded, skipping loading
	I0510 17:52:46.438321 1062960 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.33.0 crio true true} ...
	I0510 17:52:46.438480 1062960 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.33.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-173135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0510 17:52:46.438582 1062960 ssh_runner.go:195] Run: crio config
	I0510 17:52:46.483257 1062960 cni.go:84] Creating CNI manager for ""
	I0510 17:52:46.483281 1062960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 17:52:46.483292 1062960 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0510 17:52:46.483315 1062960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.33.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-173135 NodeName:newest-cni-173135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0510 17:52:46.483479 1062960 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-173135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.33.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
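Two details of the generated config are worth noting: the pod subnet (10.42.0.0/16, injected via the kubeadm.pod-network-cidr extra-config) must not overlap the service subnet (10.96.0.0/12), and the conntrack timeouts are zeroed because, as the inline comments say, kube-proxy must skip sysctls it cannot set from inside a container. A small net/netip sketch (Go 1.18+) of the disjointness check:

// cidrcheck.go: two prefixes overlap exactly when one contains the other's base.
package main

import (
	"fmt"
	"net/netip"
)

func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16")
	svc := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println("pod/service overlap:", overlaps(pod, svc)) // false, as required
}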
	
	I0510 17:52:46.483553 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.33.0
	I0510 17:52:46.492414 1062960 binaries.go:44] Found k8s binaries, skipping transfer
	I0510 17:52:46.492500 1062960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0510 17:52:46.501119 1062960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0510 17:52:46.518140 1062960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0510 17:52:46.535112 1062960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0510 17:52:46.551871 1062960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0510 17:52:46.555171 1062960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0510 17:52:46.565729 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:46.652845 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:46.666063 1062960 certs.go:68] Setting up /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135 for IP: 192.168.94.2
	I0510 17:52:46.666087 1062960 certs.go:194] generating shared ca certs ...
	I0510 17:52:46.666108 1062960 certs.go:226] acquiring lock for ca certs: {Name:mk27922925b9822e089551ad68cc2984cd622bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:46.666267 1062960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key
	I0510 17:52:46.666346 1062960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key
	I0510 17:52:46.666367 1062960 certs.go:256] generating profile certs ...
	I0510 17:52:46.666488 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/client.key
	I0510 17:52:46.666575 1062960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key.eac5560e
	I0510 17:52:46.666638 1062960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key
	I0510 17:52:46.666788 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem (1338 bytes)
	W0510 17:52:46.666836 1062960 certs.go:480] ignoring /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815_empty.pem, impossibly tiny 0 bytes
	I0510 17:52:46.666855 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca-key.pem (1679 bytes)
	I0510 17:52:46.666891 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/ca.pem (1078 bytes)
	I0510 17:52:46.666924 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/cert.pem (1123 bytes)
	I0510 17:52:46.666954 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/certs/key.pem (1675 bytes)
	I0510 17:52:46.667014 1062960 certs.go:484] found cert: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem (1708 bytes)
	I0510 17:52:46.667736 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0510 17:52:46.694046 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0510 17:52:46.720567 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0510 17:52:46.750803 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0510 17:52:46.783126 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0510 17:52:46.861172 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0510 17:52:46.886437 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0510 17:52:46.909743 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/newest-cni-173135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0510 17:52:46.932746 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/ssl/certs/7298152.pem --> /usr/share/ca-certificates/7298152.pem (1708 bytes)
	I0510 17:52:46.955864 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0510 17:52:46.978875 1062960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20720-722920/.minikube/certs/729815.pem --> /usr/share/ca-certificates/729815.pem (1338 bytes)
	I0510 17:52:47.001846 1062960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0510 17:52:47.018936 1062960 ssh_runner.go:195] Run: openssl version
	I0510 17:52:47.024207 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0510 17:52:47.033345 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036756 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 10 16:54 /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.036814 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0510 17:52:47.043306 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0510 17:52:47.051810 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/729815.pem && ln -fs /usr/share/ca-certificates/729815.pem /etc/ssl/certs/729815.pem"
	I0510 17:52:47.060972 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064315 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 10 17:06 /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.064361 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/729815.pem
	I0510 17:52:47.070986 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/729815.pem /etc/ssl/certs/51391683.0"
	I0510 17:52:47.079952 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7298152.pem && ln -fs /usr/share/ca-certificates/7298152.pem /etc/ssl/certs/7298152.pem"
	I0510 17:52:47.089676 1062960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093441 1062960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 10 17:06 /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.093504 1062960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7298152.pem
	I0510 17:52:47.100198 1062960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7298152.pem /etc/ssl/certs/3ec20f2e.0"
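The ln -fs runs above implement OpenSSL's hashed-directory CA lookup: each PEM under /etc/ssl/certs gets a <subject-hash>.0 symlink so OpenSSL can locate it by hash. A hypothetical Go sketch of the same step, shelling out to openssl (assumed to be on PATH, with a writable certs directory) exactly as the logged commands do:

```go
// Hypothetical sketch (not minikube's code): compute the OpenSSL subject
// hash of a PEM certificate and create the <hash>.0 symlink the log shows.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0") // OpenSSL looks up CAs by <subject-hash>.0
	_ = os.Remove(link)                        // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
```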
	I0510 17:52:47.108827 1062960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0510 17:52:47.112497 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0510 17:52:47.119081 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0510 17:52:47.125525 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0510 17:52:47.131948 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0510 17:52:47.138247 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0510 17:52:47.145052 1062960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
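The -checkend 86400 invocations ask whether each certificate expires within the next 24 hours (86400 seconds). A pure-Go sketch of the equivalent check using crypto/x509; the certificate path is taken from the log, everything else is illustrative:

```go
// Hypothetical pure-Go equivalent of `openssl x509 -checkend 86400`:
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// expired-soon if "now + d" is past the certificate's NotAfter
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```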
	I0510 17:52:47.152189 1062960 kubeadm.go:392] StartCluster: {Name:newest-cni-173135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:newest-cni-173135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:52:47.152299 1062960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0510 17:52:47.152356 1062960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0510 17:52:47.190954 1062960 cri.go:89] found id: ""
	I0510 17:52:47.191057 1062960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0510 17:52:47.200662 1062960 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0510 17:52:47.200683 1062960 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0510 17:52:47.200729 1062960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0510 17:52:47.210371 1062960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0510 17:52:47.211583 1062960 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-173135" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.212205 1062960 kubeconfig.go:62] /home/jenkins/minikube-integration/20720-722920/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-173135" cluster setting kubeconfig missing "newest-cni-173135" context setting]
	I0510 17:52:47.213167 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
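The repair described above adds the missing cluster and context entries to the kubeconfig. A hypothetical sketch using k8s.io/client-go's clientcmd package; the server URL and the reuse of the profile name for the AuthInfo are assumptions, not minikube's exact fields:

```go
// Hypothetical sketch of the kubeconfig repair in the log: if a named
// cluster/context is missing, add it and write the file back.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/home/jenkins/minikube-integration/20720-722920/kubeconfig"
	const name = "newest-cni-173135"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	if _, ok := cfg.Contexts[name]; ok {
		return // nothing to repair
	}
	cluster := api.NewCluster()
	cluster.Server = "https://192.168.94.2:8443" // assumed endpoint, from the log's IP/port
	cfg.Clusters[name] = cluster

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name // assumption: credentials stored under the profile name
	cfg.Contexts[name] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```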
	I0510 17:52:47.215451 1062960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0510 17:52:47.225765 1062960 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0510 17:52:47.225809 1062960 kubeadm.go:593] duration metric: took 25.118512ms to restartPrimaryControlPlane
	I0510 17:52:47.225823 1062960 kubeadm.go:394] duration metric: took 73.645898ms to StartCluster
	I0510 17:52:47.225844 1062960 settings.go:142] acquiring lock: {Name:mkb5ef074e3901ac961cf1a29314fa6c725c1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.225925 1062960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:52:47.227600 1062960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20720-722920/kubeconfig: {Name:mk9fb87a04495b85d7d2d831cf7e181b64e065fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0510 17:52:47.227929 1062960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0510 17:52:47.228146 1062960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0510 17:52:47.228262 1062960 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-173135"
	I0510 17:52:47.228286 1062960 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-173135"
	W0510 17:52:47.228300 1062960 addons.go:247] addon storage-provisioner should already be in state true
	I0510 17:52:47.228322 1062960 config.go:182] Loaded profile config "newest-cni-173135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:52:47.228340 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228374 1062960 addons.go:69] Setting default-storageclass=true in profile "newest-cni-173135"
	I0510 17:52:47.228389 1062960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-173135"
	I0510 17:52:47.228696 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.228794 1062960 addons.go:69] Setting metrics-server=true in profile "newest-cni-173135"
	I0510 17:52:47.228819 1062960 addons.go:238] Setting addon metrics-server=true in "newest-cni-173135"
	W0510 17:52:47.228830 1062960 addons.go:247] addon metrics-server should already be in state true
	I0510 17:52:47.228871 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.228905 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229098 1062960 addons.go:69] Setting dashboard=true in profile "newest-cni-173135"
	I0510 17:52:47.229122 1062960 addons.go:238] Setting addon dashboard=true in "newest-cni-173135"
	W0510 17:52:47.229131 1062960 addons.go:247] addon dashboard should already be in state true
	I0510 17:52:47.229160 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.229350 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.229636 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.231952 1062960 out.go:177] * Verifying Kubernetes components...
	I0510 17:52:47.233708 1062960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0510 17:52:47.257836 1062960 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0510 17:52:47.259786 1062960 addons.go:238] Setting addon default-storageclass=true in "newest-cni-173135"
	W0510 17:52:47.259808 1062960 addons.go:247] addon default-storageclass should already be in state true
	I0510 17:52:47.259842 1062960 host.go:66] Checking if "newest-cni-173135" exists ...
	I0510 17:52:47.260502 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0510 17:52:47.260520 1062960 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0510 17:52:47.260587 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.260894 1062960 cli_runner.go:164] Run: docker container inspect newest-cni-173135 --format={{.State.Status}}
	I0510 17:52:47.269485 1062960 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0510 17:52:47.270561 1062960 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0510 17:52:47.271826 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0510 17:52:47.271848 1062960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0510 17:52:47.271913 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.273848 1062960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0510 17:52:47.275490 1062960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.275521 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0510 17:52:47.275721 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.287652 1062960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.287676 1062960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0510 17:52:47.287737 1062960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-173135
	I0510 17:52:47.300295 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.308088 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.314958 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.317183 1062960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/newest-cni-173135/id_rsa Username:docker}
	I0510 17:52:47.570630 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:47.644300 1062960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0510 17:52:47.648111 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0510 17:52:47.648144 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0510 17:52:47.745020 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0510 17:52:47.745054 1062960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0510 17:52:47.746206 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0510 17:52:47.753235 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0510 17:52:47.753267 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0510 17:52:47.852275 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0510 17:52:47.852309 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0510 17:52:47.854261 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0510 17:52:47.854291 1062960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0510 17:52:47.957529 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0510 17:52:47.957561 1062960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0510 17:52:47.962427 1062960 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:47.962453 1062960 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0510 17:52:47.967141 1062960 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0510 17:52:47.967185 1062960 retry.go:31] will retry after 329.411117ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
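The apply fails because the apiserver is not yet reachable, so minikube schedules a retry after a jittered delay ("will retry after 329.411117ms"). A minimal sketch of that retry-with-backoff pattern (not minikube's retry package; the delays and attempt count here are made up):

```go
// Hypothetical sketch: re-run a flaky step with jittered, growing delays
// until it succeeds or attempts run out.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// jittered backoff: random delay in [base, 2*base), doubling each round
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // stand-in for the kubectl apply failure
		}
		return nil
	})
}
```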
	I0510 17:52:47.967271 1062960 api_server.go:52] waiting for apiserver process to appear ...
	I0510 17:52:47.967381 1062960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:52:48.055318 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0510 17:52:48.055400 1062960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0510 17:52:48.060787 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0510 17:52:48.149914 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0510 17:52:48.149947 1062960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0510 17:52:48.175035 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0510 17:52:48.175070 1062960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0510 17:52:48.263718 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0510 17:52:48.263750 1062960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0510 17:52:48.282195 1062960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:48.282227 1062960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0510 17:52:48.297636 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0510 17:52:48.359369 1062960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0510 17:52:52.345196 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.598944537s)
	I0510 17:52:52.345534 1062960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.378119806s)
	I0510 17:52:52.345610 1062960 api_server.go:72] duration metric: took 5.117639828s to wait for apiserver process to appear ...
	I0510 17:52:52.345622 1062960 api_server.go:88] waiting for apiserver healthz status ...
	I0510 17:52:52.345683 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.350659 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0510 17:52:52.350693 1062960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
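The 500s above come from /healthz while the rbac/bootstrap-roles post-start hook is still running; the wait loop simply polls until the endpoint returns 200 (which it does a few lines below). A minimal Go sketch of such a poll, with TLS verification skipped purely for brevity (minikube authenticates against the cluster CA):

```go
// Hypothetical sketch: poll an apiserver /healthz endpoint until it
// returns HTTP 200. InsecureSkipVerify is for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.94.2:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode) // e.g. 500 while bootstrap hooks finish
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```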
	I0510 17:52:52.462305 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.401465129s)
	I0510 17:52:52.462425 1062960 addons.go:479] Verifying addon metrics-server=true in "newest-cni-173135"
	I0510 17:52:52.462366 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.164694895s)
	I0510 17:52:52.558877 1062960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.33.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199364581s)
	I0510 17:52:52.560719 1062960 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-173135 addons enable metrics-server
	
	I0510 17:52:52.562364 1062960 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0510 17:52:52.563698 1062960 addons.go:514] duration metric: took 5.33556927s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0510 17:52:52.846151 1062960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0510 17:52:52.850590 1062960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0510 17:52:52.851935 1062960 api_server.go:141] control plane version: v1.33.0
	I0510 17:52:52.851968 1062960 api_server.go:131] duration metric: took 506.335848ms to wait for apiserver health ...
	I0510 17:52:52.851979 1062960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0510 17:52:52.855964 1062960 system_pods.go:59] 9 kube-system pods found
	I0510 17:52:52.856013 1062960 system_pods.go:61] "coredns-674b8bbfcf-l2m27" [11b63e72-35af-4a70-a7d3-b11e18104e2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856039 1062960 system_pods.go:61] "etcd-newest-cni-173135" [60c35044-778d-45d4-8d96-e58efbd9b54b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0510 17:52:52.856062 1062960 system_pods.go:61] "kindnet-5nzlt" [9158a53c-5cd1-426c-a255-37618e292899] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0510 17:52:52.856073 1062960 system_pods.go:61] "kube-apiserver-newest-cni-173135" [790eeefa-f593-4148-b5f3-43bf9807166f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0510 17:52:52.856085 1062960 system_pods.go:61] "kube-controller-manager-newest-cni-173135" [75bdb232-66d8-442a-8566-34a3d4674876] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0510 17:52:52.856096 1062960 system_pods.go:61] "kube-proxy-v2tt7" [e502d755-4ecb-4567-9259-547f7c063830] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0510 17:52:52.856108 1062960 system_pods.go:61] "kube-scheduler-newest-cni-173135" [8bfc0953-197d-4185-b2e7-6e1a2d97a8df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0510 17:52:52.856117 1062960 system_pods.go:61] "metrics-server-f79f97bbb-z4g7z" [a6bcfd5e-6f32-43ef-a6e7-336c90faf9ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856125 1062960 system_pods.go:61] "storage-provisioner" [effda141-cd8d-4f87-97a1-9166c59e1de0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0510 17:52:52.856132 1062960 system_pods.go:74] duration metric: took 4.146105ms to wait for pod list to return data ...
	I0510 17:52:52.856143 1062960 default_sa.go:34] waiting for default service account to be created ...
	I0510 17:52:52.858633 1062960 default_sa.go:45] found service account: "default"
	I0510 17:52:52.858658 1062960 default_sa.go:55] duration metric: took 2.507165ms for default service account to be created ...
	I0510 17:52:52.858670 1062960 kubeadm.go:578] duration metric: took 5.630701473s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0510 17:52:52.858701 1062960 node_conditions.go:102] verifying NodePressure condition ...
	I0510 17:52:52.861375 1062960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0510 17:52:52.861398 1062960 node_conditions.go:123] node cpu capacity is 8
	I0510 17:52:52.861411 1062960 node_conditions.go:105] duration metric: took 2.704535ms to run NodePressure ...
	I0510 17:52:52.861422 1062960 start.go:241] waiting for startup goroutines ...
	I0510 17:52:52.861431 1062960 start.go:246] waiting for cluster config update ...
	I0510 17:52:52.861444 1062960 start.go:255] writing updated cluster config ...
	I0510 17:52:52.861692 1062960 ssh_runner.go:195] Run: rm -f paused
	I0510 17:52:52.918445 1062960 start.go:607] kubectl: 1.33.0, cluster: 1.33.0 (minor skew: 0)
	I0510 17:52:52.920711 1062960 out.go:177] * Done! kubectl is now configured to use "newest-cni-173135" cluster and "default" namespace by default
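The closing line reports the minor-version skew between kubectl and the cluster; kubectl officially supports one minor version of skew in either direction. A toy sketch of computing that skew from version strings (minikube itself uses a semver library; this trims the comparison to the minor component):

```go
// Hypothetical sketch of the "minor skew" computation in the final log line.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minorSkew(kubectl, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(kubectl)
	if err != nil {
		return 0, err
	}
	b, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	skew, _ := minorSkew("1.33.0", "1.33.0")
	fmt.Println("minor skew:", skew) // 0 → kubectl and cluster match
}
```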
	
	
	==> CRI-O <==
	May 10 18:09:30 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:30.859090203Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b49b1ae2-16ec-4098-a8a8-8a300c0aeb14 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:41 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:41.858880447Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=41cef717-707a-40b0-b1cb-72919c55cc9f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:41 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:41.859163157Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=41cef717-707a-40b0-b1cb-72919c55cc9f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:44 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:44.858854520Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=16c2486d-9c6e-4b5a-a48a-59e46a1e08ea name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:44 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:44.859120576Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=16c2486d-9c6e-4b5a-a48a-59e46a1e08ea name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:54 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:54.858885629Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a6f479a3-df26-4942-bd3c-f69e0546c646 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:54 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:54.859129531Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a6f479a3-df26-4942-bd3c-f69e0546c646 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:57 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:57.858961187Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8b6bceef-9231-4e6c-81c9-fa06f0b7c67f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:09:57 old-k8s-version-697935 crio[652]: time="2025-05-10 18:09:57.859319760Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8b6bceef-9231-4e6c-81c9-fa06f0b7c67f name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:09 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:09.859438752Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7a6e0cb2-c34f-4df1-bf33-55e60479671b name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:09 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:09.860325768Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7a6e0cb2-c34f-4df1-bf33-55e60479671b name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:11 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:11.858859000Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ce9e5a32-ac77-496a-9a0d-24a96f8711b0 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:11 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:11.859251890Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ce9e5a32-ac77-496a-9a0d-24a96f8711b0 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:22 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:22.858855473Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=db63f9ad-f6d4-4ada-8817-4cb7da19da16 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:22 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:22.858888818Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=531a7c99-3223-4c75-916a-9535f8054459 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:22 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:22.859149207Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=db63f9ad-f6d4-4ada-8817-4cb7da19da16 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:22 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:22.859240838Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=531a7c99-3223-4c75-916a-9535f8054459 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:33 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:33.858855096Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9b9b2f95-9f6c-44b5-8175-76acc00e6584 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:33 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:33.859155950Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9b9b2f95-9f6c-44b5-8175-76acc00e6584 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:36 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:36.858908355Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8b0a9209-8f5c-4c4e-9722-5ca280c95ba7 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:36 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:36.859172702Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8b0a9209-8f5c-4c4e-9722-5ca280c95ba7 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:44 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:44.858750621Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=65c32e25-8b4f-43a6-9f78-1cbe959650e1 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:44 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:44.858992252Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=65c32e25-8b4f-43a6-9f78-1cbe959650e1 name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:47 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:47.858779066Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3d01b59b-9af0-44cd-b348-95d315ad25cc name=/runtime.v1alpha2.ImageService/ImageStatus
	May 10 18:10:47 old-k8s-version-697935 crio[652]: time="2025-05-10 18:10:47.859133001Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3d01b59b-9af0-44cd-b348-95d315ad25cc name=/runtime.v1alpha2.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	2d476b135232d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      3 minutes ago       Exited              dashboard-metrics-scraper   8                   9084dcec4b23c       dashboard-metrics-scraper-8d5bb5db8-vt5c5
	7ff368bbd66a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner         1                   d496b56233a1d       storage-provisioner
	c96e9a182b388       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                     0                   12e6a90e06c7d       busybox
	85ae431b96297       docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495    19 minutes ago      Running             kindnet-cni                 0                   2e251720776d8       kindnet-n9r85
	51486f3a113ad       bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16                                      19 minutes ago      Running             coredns                     0                   f3e5024271b0b       coredns-74ff55c5b-c9gkr
	53760e48d7e9d       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc                                      19 minutes ago      Running             kube-proxy                  0                   851b838d18916       kube-proxy-8tdw4
	592514d263d59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner         0                   d496b56233a1d       storage-provisioner
	162ead39b3bd0       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                      19 minutes ago      Running             etcd                        0                   9b156512e5596       etcd-old-k8s-version-697935
	b8e9e23c661a6       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899                                      19 minutes ago      Running             kube-scheduler              0                   d961a85ac011e       kube-scheduler-old-k8s-version-697935
	93b305fd7c4bf       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080                                      19 minutes ago      Running             kube-controller-manager     0                   5845887f48634       kube-controller-manager-old-k8s-version-697935
	b74f559337208       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99                                      19 minutes ago      Running             kube-apiserver              0                   05cdd901e82f2       kube-apiserver-old-k8s-version-697935
	
	
	==> coredns [51486f3a113adf2f4be53c43f2837f083c5b8bfaf0db20ec166be96f0b9f48d8] <==
	I0510 17:51:47.803202       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-05-10 17:51:17.802226451 +0000 UTC m=+0.202082203) (total time: 30.000896115s):
	Trace[2019727887]: [30.000896115s] [30.000896115s] END
	E0510 17:51:47.803228       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0510 17:51:47.803354       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-05-10 17:51:17.802387809 +0000 UTC m=+0.202243559) (total time: 30.000930239s):
	Trace[939984059]: [30.000930239s] [30.000930239s] END
	E0510 17:51:47.803371       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0510 17:51:47.803518       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-05-10 17:51:17.802529668 +0000 UTC m=+0.202385412) (total time: 30.000957284s):
	Trace[911902081]: [30.000957284s] [30.000957284s] END
	E0510 17:51:47.803530       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56662 - 46607 "HINFO IN 515717139491869813.6170506681979060124. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.053337476s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35492 - 59128 "HINFO IN 2607771096306316054.5838860855392937212. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065924808s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-697935
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-697935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e96c83983357cd8557f3cdfe077a25cc73d485a4
	                    minikube.k8s.io/name=old-k8s-version-697935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_05_10T17_48_58_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 May 2025 17:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-697935
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 May 2025 18:10:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 May 2025 18:06:50 +0000   Sat, 10 May 2025 17:48:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 May 2025 18:06:50 +0000   Sat, 10 May 2025 17:48:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 May 2025 18:06:50 +0000   Sat, 10 May 2025 17:48:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 May 2025 18:06:50 +0000   Sat, 10 May 2025 17:49:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-697935
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859344Ki
	  pods:               110
	System Info:
	  Machine ID:                 eec0bda788b749f7970518dbe01a5319
	  System UUID:                8baa3264-9de9-4216-a70b-20564168beb1
	  Boot ID:                    cf43504f-fb83-4d4b-9ff6-27d975437043
	  Kernel Version:             5.15.0-1081-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-74ff55c5b-c9gkr                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     21m
	  kube-system                 etcd-old-k8s-version-697935                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         21m
	  kube-system                 kindnet-n9r85                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-old-k8s-version-697935             250m (3%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-old-k8s-version-697935    200m (2%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-8tdw4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-old-k8s-version-697935             100m (1%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-9975d5f86-82bt9                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-vt5c5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-6bj6d               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 21m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m                kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet     Node old-k8s-version-697935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                21m                kubelet     Node old-k8s-version-697935 status is now: NodeReady
	  Normal  Starting                 19m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet     Node old-k8s-version-697935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet     Node old-k8s-version-697935 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +1.019813] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000003] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000002] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +4.095573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000007] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000001] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ec25a068cacd
	[  +0.000002] ll header: 00000000: f2 3b 00 ef 29 61 8a 83 4d ad 3a 94 08 00
	[  +3.075626] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000001] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c98d5c048caa
	[  +0.000002] ll header: 00000000: 86 cd 6a 20 a2 08 6e a7 b0 8c 49 ab 08 00
	[  +1.019906] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000006] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ba2829c5de69
	[  +0.000001] ll header: 00000000: be 1c 75 0e d6 77 e6 2c 5a 2f 2d 3a 08 00
	
	
	==> etcd [162ead39b3bd07ba0aad4c32cd0b64430e21f272ad99288e7abb418c3024e004] <==
	2025-05-10 18:06:44.709297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:06:54.709281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:07:04.709347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:07:14.709249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:07:24.709327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:07:34.709333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:07:44.709346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:07:54.709252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:08:04.709318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:08:14.709182 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:08:24.709178 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:08:34.709293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:08:44.709320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:08:54.709306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:09:04.709208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:09:14.709348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:09:24.709275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:09:34.709283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:09:44.709209 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:09:54.709303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:10:04.709325 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:10:14.709216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:10:24.709202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:10:34.709364 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-05-10 18:10:44.709311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:10:48 up  3:53,  0 users,  load average: 0.33, 0.54, 2.00
	Linux old-k8s-version-697935 5.15.0-1081-gcp #90~20.04.1-Ubuntu SMP Fri Apr 4 18:55:17 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [85ae431b96297b609864888406c19bb8709aa34cca8c804fe0d49328d5de00b5] <==
	I0510 18:08:41.751549       1 main.go:301] handling current node
	I0510 18:08:51.747513       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:08:51.747572       1 main.go:301] handling current node
	I0510 18:09:01.751560       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:09:01.751593       1 main.go:301] handling current node
	I0510 18:09:11.751504       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:09:11.751550       1 main.go:301] handling current node
	I0510 18:09:21.744459       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:09:21.744496       1 main.go:301] handling current node
	I0510 18:09:31.752149       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:09:31.752188       1 main.go:301] handling current node
	I0510 18:09:41.751505       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:09:41.751536       1 main.go:301] handling current node
	I0510 18:09:51.744937       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:09:51.744969       1 main.go:301] handling current node
	I0510 18:10:01.753161       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:10:01.753194       1 main.go:301] handling current node
	I0510 18:10:11.753267       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:10:11.753302       1 main.go:301] handling current node
	I0510 18:10:21.744081       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:10:21.744112       1 main.go:301] handling current node
	I0510 18:10:31.751521       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:10:31.751555       1 main.go:301] handling current node
	I0510 18:10:41.753606       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0510 18:10:41.753637       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b74f559337208741c7d3afe5075d361e32a449f67ebc91a2c4249f7184a95bef] <==
	E0510 18:07:17.432305       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0510 18:07:17.432315       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:07:37.020746       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:07:37.020791       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:07:37.020799       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 18:08:12.286716       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:08:12.286764       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:08:12.286774       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 18:08:43.608352       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:08:43.608399       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:08:43.608407       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0510 18:09:17.432553       1 handler_proxy.go:102] no RequestInfo found in the context
	E0510 18:09:17.432626       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0510 18:09:17.432635       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0510 18:09:22.721769       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:09:22.721814       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:09:22.721822       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 18:10:03.704929       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:10:03.704981       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:10:03.704990       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0510 18:10:37.246639       1 client.go:360] parsed scheme: "passthrough"
	I0510 18:10:37.246684       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0510 18:10:37.246694       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [93b305fd7c4bf90549012111c5bf582d7bea58f61b982b0ea0ab95b4603c5ab3] <==
	E0510 18:06:19.390146       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:06:45.176351       1 request.go:655] Throttling request took 1.048625823s, request: GET:https://192.168.103.2:8443/apis/policy/v1beta1?timeout=32s
	W0510 18:06:46.027729       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:06:49.891562       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:07:17.678086       1 request.go:655] Throttling request took 1.048727399s, request: GET:https://192.168.103.2:8443/apis/apps/v1?timeout=32s
	W0510 18:07:18.529405       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:07:20.393229       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:07:50.179553       1 request.go:655] Throttling request took 1.048461548s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	E0510 18:07:50.894633       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0510 18:07:51.030753       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:08:21.396356       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:08:22.681195       1 request.go:655] Throttling request took 1.048664283s, request: GET:https://192.168.103.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0510 18:08:23.532343       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:08:51.897937       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:08:55.182860       1 request.go:655] Throttling request took 1.048637696s, request: GET:https://192.168.103.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0510 18:08:56.033992       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:09:22.399688       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:09:27.684299       1 request.go:655] Throttling request took 1.04872387s, request: GET:https://192.168.103.2:8443/apis/apps/v1?timeout=32s
	W0510 18:09:28.535161       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:09:52.901232       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:10:00.185579       1 request.go:655] Throttling request took 1.048660613s, request: GET:https://192.168.103.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0510 18:10:01.036689       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0510 18:10:23.402966       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0510 18:10:32.686961       1 request.go:655] Throttling request took 1.048713431s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
	W0510 18:10:33.538280       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [53760e48d7e9da22f6aa6e6dbe00df0e633cfea641309fdc64339e399ea491e5] <==
	I0510 17:49:14.175845       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0510 17:49:14.175928       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0510 17:49:14.268358       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0510 17:49:14.268473       1 server_others.go:185] Using iptables Proxier.
	I0510 17:49:14.268779       1 server.go:650] Version: v1.20.0
	I0510 17:49:14.269335       1 config.go:315] Starting service config controller
	I0510 17:49:14.269352       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0510 17:49:14.269435       1 config.go:224] Starting endpoint slice config controller
	I0510 17:49:14.269566       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0510 17:49:14.369653       1 shared_informer.go:247] Caches are synced for service config 
	I0510 17:49:14.374408       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0510 17:51:17.944562       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0510 17:51:17.944752       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0510 17:51:17.978812       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0510 17:51:17.978987       1 server_others.go:185] Using iptables Proxier.
	I0510 17:51:17.979335       1 server.go:650] Version: v1.20.0
	I0510 17:51:17.980427       1 config.go:315] Starting service config controller
	I0510 17:51:17.980485       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0510 17:51:17.980553       1 config.go:224] Starting endpoint slice config controller
	I0510 17:51:17.980584       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0510 17:51:18.082130       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0510 17:51:18.082297       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [b8e9e23c661a6fa84bb7f1f46dd1e3b80bd48542b9bbbfde79c96f06e70425b5] <==
	E0510 17:48:54.756903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0510 17:48:54.757151       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0510 17:48:54.757754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0510 17:48:55.574155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0510 17:48:55.604883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0510 17:48:55.618505       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0510 17:48:55.676765       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0510 17:48:55.679964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0510 17:48:55.744488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0510 17:48:55.745288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0510 17:48:55.800324       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0510 17:48:55.846155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0510 17:48:58.250713       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0510 17:51:11.495408       1 serving.go:331] Generated self-signed cert in-memory
	I0510 17:51:16.763501       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0510 17:51:16.763531       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0510 17:51:16.763561       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0510 17:51:16.763566       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0510 17:51:16.763593       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0510 17:51:16.763597       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0510 17:51:16.763852       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0510 17:51:16.763933       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0510 17:51:16.872560       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I0510 17:51:16.872701       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0510 17:51:16.965367       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	May 10 18:09:30 old-k8s-version-697935 kubelet[1198]: E0510 18:09:30.859362    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:09:37 old-k8s-version-697935 kubelet[1198]: I0510 18:09:37.858392    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d476b135232db6d61341230d8b464e7641b5585136a9537e7b93d9efdc5c075
	May 10 18:09:37 old-k8s-version-697935 kubelet[1198]: E0510 18:09:37.858683    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:09:41 old-k8s-version-697935 kubelet[1198]: E0510 18:09:41.859382    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:09:44 old-k8s-version-697935 kubelet[1198]: E0510 18:09:44.859486    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:09:48 old-k8s-version-697935 kubelet[1198]: I0510 18:09:48.858372    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d476b135232db6d61341230d8b464e7641b5585136a9537e7b93d9efdc5c075
	May 10 18:09:48 old-k8s-version-697935 kubelet[1198]: E0510 18:09:48.858842    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:09:54 old-k8s-version-697935 kubelet[1198]: E0510 18:09:54.859450    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:09:57 old-k8s-version-697935 kubelet[1198]: E0510 18:09:57.859570    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:10:03 old-k8s-version-697935 kubelet[1198]: I0510 18:10:03.858583    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d476b135232db6d61341230d8b464e7641b5585136a9537e7b93d9efdc5c075
	May 10 18:10:03 old-k8s-version-697935 kubelet[1198]: E0510 18:10:03.859015    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:10:09 old-k8s-version-697935 kubelet[1198]: E0510 18:10:09.861656    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:10:11 old-k8s-version-697935 kubelet[1198]: E0510 18:10:11.859485    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:10:15 old-k8s-version-697935 kubelet[1198]: I0510 18:10:15.858726    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d476b135232db6d61341230d8b464e7641b5585136a9537e7b93d9efdc5c075
	May 10 18:10:15 old-k8s-version-697935 kubelet[1198]: E0510 18:10:15.859108    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:10:22 old-k8s-version-697935 kubelet[1198]: E0510 18:10:22.859411    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:10:22 old-k8s-version-697935 kubelet[1198]: E0510 18:10:22.859410    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:10:27 old-k8s-version-697935 kubelet[1198]: I0510 18:10:27.858440    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d476b135232db6d61341230d8b464e7641b5585136a9537e7b93d9efdc5c075
	May 10 18:10:27 old-k8s-version-697935 kubelet[1198]: E0510 18:10:27.858884    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:10:33 old-k8s-version-697935 kubelet[1198]: E0510 18:10:33.859371    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:10:36 old-k8s-version-697935 kubelet[1198]: E0510 18:10:36.859451    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	May 10 18:10:42 old-k8s-version-697935 kubelet[1198]: I0510 18:10:42.858333    1198 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d476b135232db6d61341230d8b464e7641b5585136a9537e7b93d9efdc5c075
	May 10 18:10:42 old-k8s-version-697935 kubelet[1198]: E0510 18:10:42.858607    1198 pod_workers.go:191] Error syncing pod 0c45a349-f180-4fb6-b199-62c35f286b01 ("dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-vt5c5_kubernetes-dashboard(0c45a349-f180-4fb6-b199-62c35f286b01)"
	May 10 18:10:44 old-k8s-version-697935 kubelet[1198]: E0510 18:10:44.859254    1198 pod_workers.go:191] Error syncing pod a5ed34c7-3a50-499a-99db-059e22fe8837 ("metrics-server-9975d5f86-82bt9_kube-system(a5ed34c7-3a50-499a-99db-059e22fe8837)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 10 18:10:47 old-k8s-version-697935 kubelet[1198]: E0510 18:10:47.859371    1198 pod_workers.go:191] Error syncing pod 8dfa2561-0fd4-4df5-93e1-f807fe41266a ("kubernetes-dashboard-cd95d586-6bj6d_kubernetes-dashboard(8dfa2561-0fd4-4df5-93e1-f807fe41266a)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	
	
	==> storage-provisioner [592514d263d59d3d9ed18bc51963bdd8df639168346e410f3424186ff72fc2c7] <==
	I0510 17:49:38.477578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:49:38.489302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:49:38.489346       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0510 17:49:38.502387       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 17:49:38.502694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_2320ec47-387a-4f04-b624-7088d7268c3d!
	I0510 17:49:38.502771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"979ad0ce-9512-4fa4-88fe-42fe076ce8b8", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-697935_2320ec47-387a-4f04-b624-7088d7268c3d became leader
	I0510 17:49:38.603110       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_2320ec47-387a-4f04-b624-7088d7268c3d!
	I0510 17:51:17.773328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0510 17:51:47.776812       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7ff368bbd66a425974db0746bf2ed56b83b99a060d446470e7f4227046f9dc76] <==
	I0510 17:51:48.150087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0510 17:51:48.161156       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0510 17:51:48.161214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0510 17:52:05.588741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0510 17:52:05.588922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_18a6cee5-9c94-47f4-aec5-4878384bdfdf!
	I0510 17:52:05.588874       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"979ad0ce-9512-4fa4-88fe-42fe076ce8b8", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-697935_18a6cee5-9c94-47f4-aec5-4878384bdfdf became leader
	I0510 17:52:05.689934       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-697935_18a6cee5-9c94-47f4-aec5-4878384bdfdf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-697935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-697935 describe pod metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-697935 describe pod metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d: exit status 1 (55.124033ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-82bt9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd95d586-6bj6d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-697935 describe pod metrics-server-9975d5f86-82bt9 kubernetes-dashboard-cd95d586-6bj6d: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.37s)

                                                
                                    

Test pass (290/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 5.91
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.33.0/json-events 4.15
13 TestDownloadOnly/v1.33.0/preload-exists 0
17 TestDownloadOnly/v1.33.0/LogsDuration 0.06
18 TestDownloadOnly/v1.33.0/DeleteAll 0.21
19 TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.17
21 TestBinaryMirror 0.82
22 TestOffline 84.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 149.08
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.53
35 TestAddons/parallel/Registry 13.94
37 TestAddons/parallel/InspektorGadget 10.66
38 TestAddons/parallel/MetricsServer 6.07
41 TestAddons/parallel/Headlamp 18.05
42 TestAddons/parallel/CloudSpanner 5.49
43 TestAddons/parallel/LocalPath 50.58
44 TestAddons/parallel/NvidiaDevicePlugin 5.71
45 TestAddons/parallel/Yakd 11.8
46 TestAddons/parallel/AmdGpuDevicePlugin 6.52
47 TestAddons/StoppedEnableDisable 12.14
48 TestCertOptions 29.49
49 TestCertExpiration 231.36
51 TestForceSystemdFlag 39.74
52 TestForceSystemdEnv 36.18
54 TestKVMDriverInstallOrUpdate 3.53
58 TestErrorSpam/setup 25.04
59 TestErrorSpam/start 0.58
60 TestErrorSpam/status 0.86
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 1.42
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 70.2
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 24.64
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
75 TestFunctional/serial/CacheCmd/cache/add_local 1.29
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 32.27
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.37
86 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/serial/InvalidService 4.04
89 TestFunctional/parallel/ConfigCmd 0.42
91 TestFunctional/parallel/DryRun 0.54
92 TestFunctional/parallel/InternationalLanguage 0.24
93 TestFunctional/parallel/StatusCmd 1.11
97 TestFunctional/parallel/ServiceCmdConnect 8.65
98 TestFunctional/parallel/AddonsCmd 0.13
101 TestFunctional/parallel/SSHCmd 0.54
102 TestFunctional/parallel/CpCmd 2.02
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.68
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.22
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
116 TestFunctional/parallel/ProfileCmd/profile_list 0.51
117 TestFunctional/parallel/MountCmd/any-port 6.87
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.46
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
125 TestFunctional/parallel/ImageCommands/ImageBuild 2.04
126 TestFunctional/parallel/ImageCommands/Setup 1
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
133 TestFunctional/parallel/MountCmd/specific-port 1.89
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
135 TestFunctional/parallel/ServiceCmd/List 0.32
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/ServiceCmd/Format 0.35
145 TestFunctional/parallel/ServiceCmd/URL 0.35
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 156.81
162 TestMultiControlPlane/serial/DeployApp 4.26
163 TestMultiControlPlane/serial/PingHostFromPods 1.08
164 TestMultiControlPlane/serial/AddWorkerNode 24.01
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 15.74
168 TestMultiControlPlane/serial/StopSecondaryNode 12.54
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
170 TestMultiControlPlane/serial/RestartSecondaryNode 20.32
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 116.45
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.25
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 35.58
176 TestMultiControlPlane/serial/RestartCluster 85
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
178 TestMultiControlPlane/serial/AddSecondaryNode 37.53
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
183 TestJSONOutput/start/Command 71.94
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.72
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.6
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.76
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
208 TestKicCustomNetwork/create_custom_network 33.37
209 TestKicCustomNetwork/use_default_bridge_network 27.37
210 TestKicExistingNetwork 24.24
211 TestKicCustomSubnet 25.22
212 TestKicStaticIP 27.14
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 52.26
217 TestMountStart/serial/StartWithMountFirst 5.18
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 5.23
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.58
222 TestMountStart/serial/VerifyMountPostDelete 0.24
223 TestMountStart/serial/Stop 1.17
224 TestMountStart/serial/RestartStopped 7.06
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 94.19
229 TestMultiNode/serial/DeployApp2Nodes 3.39
230 TestMultiNode/serial/PingHostFrom2Pods 0.73
231 TestMultiNode/serial/AddNode 24.7
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.61
234 TestMultiNode/serial/CopyFile 8.92
235 TestMultiNode/serial/StopNode 2.08
236 TestMultiNode/serial/StartAfterStop 7.47
237 TestMultiNode/serial/RestartKeepsNodes 70.74
238 TestMultiNode/serial/DeleteNode 5.23
239 TestMultiNode/serial/StopMultiNode 23.72
240 TestMultiNode/serial/RestartMultiNode 51.11
241 TestMultiNode/serial/ValidateNameConflict 25.02
246 TestPreload 116.79
248 TestScheduledStopUnix 98.06
251 TestInsufficientStorage 9.99
252 TestRunningBinaryUpgrade 62.07
254 TestKubernetesUpgrade 202.54
255 TestMissingContainerUpgrade 130.03
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
258 TestStoppedBinaryUpgrade/Setup 0.4
259 TestNoKubernetes/serial/StartWithK8s 36.16
260 TestStoppedBinaryUpgrade/Upgrade 92.7
261 TestNoKubernetes/serial/StartWithStopK8s 12.31
262 TestNoKubernetes/serial/Start 7.62
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
264 TestNoKubernetes/serial/ProfileList 6.07
265 TestNoKubernetes/serial/Stop 1.22
266 TestNoKubernetes/serial/StartNoArgs 6.49
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
276 TestNetworkPlugins/group/false 3.33
288 TestPause/serial/Start 39.54
289 TestNetworkPlugins/group/auto/Start 41.49
290 TestPause/serial/SecondStartNoReconfiguration 30.24
291 TestNetworkPlugins/group/auto/KubeletFlags 0.29
292 TestNetworkPlugins/group/auto/NetCatPod 9.21
293 TestPause/serial/Pause 0.68
294 TestPause/serial/VerifyStatus 0.28
295 TestPause/serial/Unpause 0.62
296 TestPause/serial/PauseAgain 0.74
297 TestPause/serial/DeletePaused 2.61
298 TestNetworkPlugins/group/auto/DNS 0.14
299 TestPause/serial/VerifyDeletedResources 0.75
300 TestNetworkPlugins/group/auto/Localhost 0.12
301 TestNetworkPlugins/group/auto/HairPin 0.1
302 TestNetworkPlugins/group/calico/Start 55.84
303 TestNetworkPlugins/group/custom-flannel/Start 54.32
304 TestNetworkPlugins/group/kindnet/Start 42.4
305 TestNetworkPlugins/group/calico/ControllerPod 6.01
306 TestNetworkPlugins/group/calico/KubeletFlags 0.25
307 TestNetworkPlugins/group/calico/NetCatPod 9.19
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/DNS 0.12
310 TestNetworkPlugins/group/calico/Localhost 0.1
311 TestNetworkPlugins/group/calico/HairPin 0.11
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
313 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
316 TestNetworkPlugins/group/kindnet/DNS 0.13
317 TestNetworkPlugins/group/kindnet/Localhost 0.13
318 TestNetworkPlugins/group/kindnet/HairPin 0.11
319 TestNetworkPlugins/group/custom-flannel/DNS 0.14
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
322 TestNetworkPlugins/group/flannel/Start 54.13
323 TestNetworkPlugins/group/enable-default-cni/Start 71.38
324 TestNetworkPlugins/group/bridge/Start 69.23
326 TestStartStop/group/old-k8s-version/serial/FirstStart 132.12
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
329 TestNetworkPlugins/group/flannel/NetCatPod 9.18
330 TestNetworkPlugins/group/flannel/DNS 0.12
331 TestNetworkPlugins/group/flannel/Localhost 0.11
332 TestNetworkPlugins/group/flannel/HairPin 0.12
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
335 TestNetworkPlugins/group/bridge/NetCatPod 10.22
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
338 TestStartStop/group/no-preload/serial/FirstStart 63.35
339 TestNetworkPlugins/group/bridge/DNS 0.14
340 TestNetworkPlugins/group/bridge/Localhost 0.13
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
342 TestNetworkPlugins/group/bridge/HairPin 0.12
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
346 TestStartStop/group/embed-certs/serial/FirstStart 47.2
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.37
349 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
350 TestStartStop/group/no-preload/serial/DeployApp 8.22
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
352 TestStartStop/group/old-k8s-version/serial/Stop 12.01
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
355 TestStartStop/group/embed-certs/serial/DeployApp 7.27
356 TestStartStop/group/no-preload/serial/Stop 11.89
357 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
359 TestStartStop/group/embed-certs/serial/Stop 13.19
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
361 TestStartStop/group/old-k8s-version/serial/SecondStart 111.18
362 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.6
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
364 TestStartStop/group/no-preload/serial/SecondStart 52.73
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
367 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.39
368 TestStartStop/group/embed-certs/serial/SecondStart 50.01
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/no-preload/serial/Pause 2.66
376 TestStartStop/group/newest-cni/serial/FirstStart 30.79
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
379 TestStartStop/group/newest-cni/serial/Stop 1.2
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/newest-cni/serial/SecondStart 13.36
383 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
386 TestStartStop/group/newest-cni/serial/Pause 2.65
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
391 TestStartStop/group/embed-certs/serial/Pause 2.75
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
394 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
395 TestStartStop/group/old-k8s-version/serial/Pause 2.41
TestDownloadOnly/v1.20.0/json-events (5.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-029562 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-029562 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.909086951s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.91s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0510 16:54:26.267656  729815 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0510 16:54:26.267798  729815 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-029562
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-029562: exit status 85 (65.016301ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-029562 | jenkins | v1.35.0 | 10 May 25 16:54 UTC |          |
	|         | -p download-only-029562        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 16:54:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 16:54:20.404519  729827 out.go:345] Setting OutFile to fd 1 ...
	I0510 16:54:20.404792  729827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:20.404800  729827 out.go:358] Setting ErrFile to fd 2...
	I0510 16:54:20.404805  729827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:20.405008  729827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	W0510 16:54:20.405120  729827 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20720-722920/.minikube/config/config.json: open /home/jenkins/minikube-integration/20720-722920/.minikube/config/config.json: no such file or directory
	I0510 16:54:20.405707  729827 out.go:352] Setting JSON to true
	I0510 16:54:20.406671  729827 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9407,"bootTime":1746886653,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 16:54:20.406784  729827 start.go:140] virtualization: kvm guest
	I0510 16:54:20.409628  729827 out.go:97] [download-only-029562] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 16:54:20.409760  729827 notify.go:220] Checking for updates...
	W0510 16:54:20.409789  729827 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball: no such file or directory
	I0510 16:54:20.411317  729827 out.go:169] MINIKUBE_LOCATION=20720
	I0510 16:54:20.413281  729827 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 16:54:20.414824  729827 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 16:54:20.416499  729827 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 16:54:20.417939  729827 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0510 16:54:20.420383  729827 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0510 16:54:20.420653  729827 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 16:54:20.443221  729827 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 16:54:20.443312  729827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:20.498274  729827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:20.489468228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:20.498370  729827 docker.go:318] overlay module found
	I0510 16:54:20.499980  729827 out.go:97] Using the docker driver based on user configuration
	I0510 16:54:20.500012  729827 start.go:304] selected driver: docker
	I0510 16:54:20.500022  729827 start.go:908] validating driver "docker" against <nil>
	I0510 16:54:20.500120  729827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:20.546866  729827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:20.538381371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:20.547049  729827 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 16:54:20.547615  729827 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0510 16:54:20.547750  729827 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 16:54:20.549647  729827 out.go:169] Using Docker driver with root privileges
	I0510 16:54:20.550860  729827 cni.go:84] Creating CNI manager for ""
	I0510 16:54:20.550921  729827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0510 16:54:20.550932  729827 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0510 16:54:20.550995  729827 start.go:347] cluster config:
	{Name:download-only-029562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-029562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 16:54:20.552299  729827 out.go:97] Starting "download-only-029562" primary control-plane node in "download-only-029562" cluster
	I0510 16:54:20.552318  729827 cache.go:121] Beginning downloading kic base image for docker with crio
	I0510 16:54:20.553477  729827 out.go:97] Pulling base image v0.0.46-1746731792-20718 ...
	I0510 16:54:20.553501  729827 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 16:54:20.553662  729827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local docker daemon
	I0510 16:54:20.570144  729827 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 to local cache
	I0510 16:54:20.570379  729827 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 in local cache directory
	I0510 16:54:20.570484  729827 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 to local cache
	I0510 16:54:20.585848  729827 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0510 16:54:20.585875  729827 cache.go:56] Caching tarball of preloaded images
	I0510 16:54:20.586026  729827 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0510 16:54:20.587812  729827 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0510 16:54:20.587838  729827 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0510 16:54:20.615465  729827 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0510 16:54:23.799407  729827 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 as a tarball
	I0510 16:54:24.712575  729827 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0510 16:54:24.712679  729827 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-029562 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029562"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-029562
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.33.0/json-events (4.15s)

=== RUN   TestDownloadOnly/v1.33.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-184104 --force --alsologtostderr --kubernetes-version=v1.33.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-184104 --force --alsologtostderr --kubernetes-version=v1.33.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.15099628s)
--- PASS: TestDownloadOnly/v1.33.0/json-events (4.15s)

TestDownloadOnly/v1.33.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.33.0/preload-exists
I0510 16:54:30.828622  729815 preload.go:131] Checking if preload exists for k8s version v1.33.0 and runtime crio
I0510 16:54:30.828673  729815 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20720-722920/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.33.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.33.0/preload-exists (0.00s)

TestDownloadOnly/v1.33.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.33.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-184104
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-184104: exit status 85 (63.250256ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-029562 | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | -p download-only-029562        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| delete  | -p download-only-029562        | download-only-029562 | jenkins | v1.35.0 | 10 May 25 16:54 UTC | 10 May 25 16:54 UTC |
	| start   | -o=json --download-only        | download-only-184104 | jenkins | v1.35.0 | 10 May 25 16:54 UTC |                     |
	|         | -p download-only-184104        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.33.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/05/10 16:54:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0510 16:54:26.720137  730169 out.go:345] Setting OutFile to fd 1 ...
	I0510 16:54:26.720398  730169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:26.720408  730169 out.go:358] Setting ErrFile to fd 2...
	I0510 16:54:26.720413  730169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 16:54:26.720615  730169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 16:54:26.721179  730169 out.go:352] Setting JSON to true
	I0510 16:54:26.722122  730169 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9414,"bootTime":1746886653,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 16:54:26.722243  730169 start.go:140] virtualization: kvm guest
	I0510 16:54:26.724252  730169 out.go:97] [download-only-184104] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 16:54:26.724427  730169 notify.go:220] Checking for updates...
	I0510 16:54:26.725632  730169 out.go:169] MINIKUBE_LOCATION=20720
	I0510 16:54:26.726718  730169 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 16:54:26.727978  730169 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 16:54:26.729042  730169 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 16:54:26.730017  730169 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0510 16:54:26.731972  730169 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0510 16:54:26.732192  730169 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 16:54:26.753816  730169 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 16:54:26.753887  730169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:26.807067  730169 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:26.795345142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:26.807235  730169 docker.go:318] overlay module found
	I0510 16:54:26.809041  730169 out.go:97] Using the docker driver based on user configuration
	I0510 16:54:26.809076  730169 start.go:304] selected driver: docker
	I0510 16:54:26.809084  730169 start.go:908] validating driver "docker" against <nil>
	I0510 16:54:26.809180  730169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 16:54:26.859376  730169 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-05-10 16:54:26.850428012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 16:54:26.859583  730169 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0510 16:54:26.860097  730169 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0510 16:54:26.860265  730169 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0510 16:54:26.862194  730169 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-184104 host does not exist
	  To start a cluster, run: "minikube start -p download-only-184104"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.33.0/LogsDuration (0.06s)

TestDownloadOnly/v1.33.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.33.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.33.0/DeleteAll (0.21s)

TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-184104
--- PASS: TestDownloadOnly/v1.33.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.17s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-238188 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-238188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-238188
--- PASS: TestDownloadOnlyKic (1.17s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I0510 16:54:32.685387  729815 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-854589 --alsologtostderr --binary-mirror http://127.0.0.1:37525 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-854589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-854589
--- PASS: TestBinaryMirror (0.82s)
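
The checksum=file: parameter in the kubectl URL above tells minikube's downloader to fetch the published .sha256 file and verify the binary against it before use. A minimal manual sketch of the same verification, assuming curl and sha256sum on a Linux host (the local filename is illustrative):

  # Fetch the binary and its published SHA-256 digest, then check they match.
  curl -fsSLo kubectl https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl
  echo "$(curl -fsSL https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum -c -

sha256sum -c expects "digest  filename" with two spaces, which is why the echo assembles the line; the .sha256 file itself carries only the digest.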

TestOffline (84.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-603565 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-603565 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m22.38312321s)
helpers_test.go:175: Cleaning up "offline-crio-603565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-603565
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-603565: (2.577664379s)
--- PASS: TestOffline (84.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-088134
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-088134: exit status 85 (57.01911ms)

-- stdout --
	* Profile "addons-088134" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-088134"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-088134
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-088134: exit status 85 (57.714823ms)

-- stdout --
	* Profile "addons-088134" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-088134"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (149.08s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-088134 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-088134 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m29.077344459s)
--- PASS: TestAddons/Setup (149.08s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-088134 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-088134 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-088134 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-088134 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eaaec445-9423-40ba-8faf-485b61a0515d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eaaec445-9423-40ba-8faf-485b61a0515d] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003621365s
addons_test.go:633: (dbg) Run:  kubectl --context addons-088134 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-088134 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-088134 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.53s)

TestAddons/parallel/Registry (13.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.03002ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-694bd45846-pjmvv" [f99f3f51-b2d2-444c-9172-a281336a69ca] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003077922s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2hrkl" [f1926b87-dafd-49dc-a845-8f1b075517f2] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003420207s
addons_test.go:331: (dbg) Run:  kubectl --context addons-088134 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-088134 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-088134 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.168836807s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 ip
2025/05/10 16:57:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.94s)

TestAddons/parallel/InspektorGadget (10.66s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dzrbp" [7947fda6-14d1-4b5d-9ffe-19fbbd00baca] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00400108s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable inspektor-gadget --alsologtostderr -v=1: (5.650151765s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

TestAddons/parallel/MetricsServer (6.07s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.420605ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-mj6nz" [c09c57d8-2189-467d-ba7e-6e516538365f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003066868s
addons_test.go:402: (dbg) Run:  kubectl --context addons-088134 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.07s)

TestAddons/parallel/Headlamp (18.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-088134 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-088134 --alsologtostderr -v=1: (1.215031492s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-nvjdv" [7402eb57-e84c-4e7a-9e9c-c3abbf3e5ec4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-nvjdv" [7402eb57-e84c-4e7a-9e9c-c3abbf3e5ec4] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004374783s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable headlamp --alsologtostderr -v=1: (5.833121282s)
--- PASS: TestAddons/parallel/Headlamp (18.05s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-b85f6bbb8-p4h7w" [a4d59af9-ed87-460b-8058-4a7ceae1d6bd] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003620876s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (50.58s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-088134 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-088134 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-088134 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5e96fd38-e6b4-40ff-adc9-d2fa442eebd2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5e96fd38-e6b4-40ff-adc9-d2fa442eebd2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5e96fd38-e6b4-40ff-adc9-d2fa442eebd2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003210512s
addons_test.go:906: (dbg) Run:  kubectl --context addons-088134 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 ssh "cat /opt/local-path-provisioner/pvc-d21bcf7d-7863-46d1-95c2-f7795a677260_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-088134 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-088134 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.713824498s)
--- PASS: TestAddons/parallel/LocalPath (50.58s)
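
The five identical jsonpath queries above are the helper polling the PVC phase while the pod starts; with the local-path provisioner the claim typically stays Pending until the consuming pod is scheduled (WaitForFirstConsumer) and only then flips to Bound. A minimal sketch of such a wait loop, reusing the context and names from the log (the fixed 2-second interval and the missing timeout are simplifications):

  # Poll the PVC until the provisioner binds it.
  until [ "$(kubectl --context addons-088134 get pvc test-pvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
  done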

TestAddons/parallel/NvidiaDevicePlugin (5.71s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-slbqt" [a926bab4-4e66-4c98-963e-f41f5ea1fa49] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003691144s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

TestAddons/parallel/Yakd (11.8s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-946nr" [8117f3a6-2f71-4cba-bee6-0486e7fe3aab] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003349691s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-088134 addons disable yakd --alsologtostderr -v=1: (5.794096949s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

TestAddons/parallel/AmdGpuDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-wkh8g" [e739ed11-e98a-4e6d-9105-14c2c5463669] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.00390043s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.52s)

TestAddons/StoppedEnableDisable (12.14s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-088134
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-088134: (11.887991647s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-088134
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-088134
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-088134
--- PASS: TestAddons/StoppedEnableDisable (12.14s)

TestCertOptions (29.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-099653 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-099653 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.945225217s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-099653 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-099653 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-099653 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-099653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-099653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-099653: (1.906784137s)
--- PASS: TestCertOptions (29.49s)
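
The openssl step in this test is the real assertion: the extra --apiserver-ips and --apiserver-names from the start command must show up as Subject Alternative Names in the generated serving certificate, and the config view and admin.conf reads then presumably confirm the non-default port 8555. A narrower form of the same certificate check, assuming an OpenSSL new enough for the -ext flag (1.1.1+); profile name and path are taken from the log:

  # Print only the SAN extension; 192.168.15.15 and www.google.com should be listed.
  out/minikube-linux-amd64 -p cert-options-099653 ssh \
    "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"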

TestCertExpiration (231.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-632402 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-632402 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.832450327s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-632402 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-632402 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.942745846s)
helpers_test.go:175: Cleaning up "cert-expiration-632402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-632402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-632402: (2.584656401s)
--- PASS: TestCertExpiration (231.36s)

TestForceSystemdFlag (39.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-981635 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-981635 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.852560625s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-981635 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-981635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-981635
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-981635: (5.563513856s)
--- PASS: TestForceSystemdFlag (39.74s)
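
The cat of /etc/crio/crio.conf.d/02-crio.conf above is how the test observes whether --force-systemd took effect: minikube writes a CRI-O drop-in that selects the cgroup manager. A narrower check of the same drop-in; the key name and expected value here are assumptions based on CRI-O's configuration schema, not copied from this run:

  # Expect cgroup_manager = "systemd" in the drop-in when --force-systemd is set.
  out/minikube-linux-amd64 -p force-systemd-flag-981635 ssh \
    "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"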

TestForceSystemdEnv (36.18s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-718671 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-718671 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.125110647s)
helpers_test.go:175: Cleaning up "force-systemd-env-718671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-718671
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-718671: (4.053469604s)
--- PASS: TestForceSystemdEnv (36.18s)

TestKVMDriverInstallOrUpdate (3.53s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0510 17:44:10.597357  729815 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0510 17:44:10.597495  729815 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0510 17:44:10.636737  729815 install.go:62] docker-machine-driver-kvm2: exit status 1
W0510 17:44:10.636890  729815 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0510 17:44:10.636948  729815 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2791158772/001/docker-machine-driver-kvm2
I0510 17:44:10.924878  729815 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2791158772/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960] Decompressors:map[bz2:0xc0005c8fe0 gz:0xc0005c8fe8 tar:0xc0005c8f70 tar.bz2:0xc0005c8f80 tar.gz:0xc0005c8fa0 tar.xz:0xc0005c8fc0 tar.zst:0xc0005c8fd0 tbz2:0xc0005c8f80 tgz:0xc0005c8fa0 txz:0xc0005c8fc0 tzst:0xc0005c8fd0 xz:0xc0005c8ff0 zip:0xc0005c9000 zst:0xc0005c8ff8] Getters:map[file:0xc001a35850 http:0xc0004313b0 https:0xc000431400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0510 17:44:10.924941  729815 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2791158772/001/docker-machine-driver-kvm2
I0510 17:44:12.534797  729815 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0510 17:44:12.534889  729815 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0510 17:44:12.603817  729815 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0510 17:44:12.603856  729815 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0510 17:44:12.603935  729815 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0510 17:44:12.603969  729815 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2791158772/002/docker-machine-driver-kvm2
I0510 17:44:12.770092  729815 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2791158772/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960 0x557a960] Decompressors:map[bz2:0xc0005c8fe0 gz:0xc0005c8fe8 tar:0xc0005c8f70 tar.bz2:0xc0005c8f80 tar.gz:0xc0005c8fa0 tar.xz:0xc0005c8fc0 tar.zst:0xc0005c8fd0 tbz2:0xc0005c8f80 tgz:0xc0005c8fa0 txz:0xc0005c8fc0 tzst:0xc0005c8fd0 xz:0xc0005c8ff0 zip:0xc0005c9000 zst:0xc0005c8ff8] Getters:map[file:0xc001f8b560 http:0xc00088d770 https:0xc00088d7c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0510 17:44:12.770154  729815 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2791158772/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.53s)
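
The download.go lines above show the fallback order: the arch-suffixed release asset is tried first, its checksum fetch 404s for v1.3.0, and the download retries the unsuffixed "common" name. A minimal sketch of that retry logic; the probe helper is an assumption standing in for minikube's checksummed getter:

    package main

    import (
        "fmt"
        "net/http"
    )

    // headOK is a stand-in for the real checksummed getter; here we only
    // probe whether a release asset (or its .sha256 file) exists at all.
    func headOK(url string) bool {
        resp, err := http.Head(url)
        if err != nil {
            return false
        }
        resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    // driverURL mirrors the fallback seen in the log: prefer the
    // arch-specific asset, fall back to the common name when its
    // checksum file is missing (as it is for v1.3.0).
    func driverURL(version, arch string) string {
        base := "https://github.com/kubernetes/minikube/releases/download/" + version
        if u := base + "/docker-machine-driver-kvm2-" + arch; headOK(u + ".sha256") {
            return u
        }
        return base + "/docker-machine-driver-kvm2"
    }

    func main() {
        fmt.Println(driverURL("v1.3.0", "amd64")) // falls back to the common asset
    }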
TestErrorSpam/setup (25.04s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-982017 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-982017 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-982017 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-982017 --driver=docker  --container-runtime=crio: (25.038349843s)
--- PASS: TestErrorSpam/setup (25.04s)
TestErrorSpam/start (0.58s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)
TestErrorSpam/status (0.86s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 status
--- PASS: TestErrorSpam/status (0.86s)
TestErrorSpam/pause (1.51s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 pause
--- PASS: TestErrorSpam/pause (1.51s)
TestErrorSpam/unpause (1.69s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 unpause
--- PASS: TestErrorSpam/unpause (1.69s)
TestErrorSpam/stop (1.42s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 stop: (1.23617772s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-982017 --log_dir /tmp/nospam-982017 stop
--- PASS: TestErrorSpam/stop (1.42s)
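
Each TestErrorSpam subtest above runs the same subcommand several times against the nospam profile and fails if the output picks up unexpected warnings or errors between runs. A rough sketch of that pattern; the substring filter below is a simplification standing in for the real allow/deny lists in error_spam_test.go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a subcommand against the nospam profile and returns
    // any output lines that look like log spam.
    func run(sub string) []string {
        out, _ := exec.Command("out/minikube-linux-amd64",
            "-p", "nospam-982017", "--log_dir", "/tmp/nospam-982017", sub).CombinedOutput()
        var spam []string
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "error") || strings.Contains(line, "WARNING") {
                spam = append(spam, line)
            }
        }
        return spam
    }

    func main() {
        for i := 0; i < 3; i++ { // repeated runs catch nondeterministic spam
            if spam := run("status"); len(spam) > 0 {
                fmt.Println("unexpected output:", spam)
            }
        }
    }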
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20720-722920/.minikube/files/etc/test/nested/copy/729815/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (70.2s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-914764 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0510 17:07:03.268521  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.274953  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.286429  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.307948  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.349372  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.430903  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.592486  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:03.914733  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:04.556649  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:05.838055  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:08.399721  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:13.521964  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:23.764210  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:07:44.245859  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-914764 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.203436812s)
--- PASS: TestFunctional/serial/StartWithProxy (70.20s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (24.64s)
=== RUN   TestFunctional/serial/SoftStart
I0510 17:07:54.192503  729815 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-914764 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-914764 --alsologtostderr -v=8: (24.63539542s)
functional_test.go:680: soft start took 24.636137262s for "functional-914764" cluster.
I0510 17:08:18.828281  729815 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestFunctional/serial/SoftStart (24.64s)
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-914764 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 cache add registry.k8s.io/pause:3.1: (1.011469769s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 cache add registry.k8s.io/pause:3.3: (1.107729594s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)
TestFunctional/serial/CacheCmd/cache/add_local (1.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-914764 /tmp/TestFunctionalserialCacheCmdcacheadd_local2090315407/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cache add minikube-local-cache-test:functional-914764
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cache delete minikube-local-cache-test:functional-914764
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-914764
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.901571ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0510 17:08:25.208192  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
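
The cache_reload sequence above is a four-step round trip: delete the image inside the node, prove "crictl inspecti" now fails, run "cache reload" to restore it from the host-side cache, and prove the same inspect succeeds. A compact sketch of that flow using only the CLI calls visible in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // mk runs the minikube binary under test against the functional profile.
    func mk(args ...string) error {
        full := append([]string{"-p", "functional-914764"}, args...)
        return exec.Command("out/minikube-linux-amd64", full...).Run()
    }

    func main() {
        img := "registry.k8s.io/pause:latest"
        _ = mk("ssh", "sudo crictl rmi "+img)
        if mk("ssh", "sudo crictl inspecti "+img) == nil {
            fmt.Println("expected inspecti to fail after rmi")
        }
        _ = mk("cache", "reload") // re-loads cached images into the node
        if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
            fmt.Println("image still missing after reload:", err)
        }
    }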
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 kubectl -- --context functional-914764 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-914764 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
TestFunctional/serial/ExtraConfig (32.27s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-914764 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-914764 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.269002234s)
functional_test.go:778: restart took 32.269173154s for "functional-914764" cluster.
I0510 17:08:57.993004  729815 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestFunctional/serial/ExtraConfig (32.27s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-914764 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.37s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 logs: (1.365335913s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)
TestFunctional/serial/LogsFileCmd (1.38s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 logs --file /tmp/TestFunctionalserialLogsFileCmd4162178072/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 logs --file /tmp/TestFunctionalserialLogsFileCmd4162178072/001/logs.txt: (1.383963595s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)
TestFunctional/serial/InvalidService (4.04s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-914764 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-914764
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-914764: exit status 115 (320.200697ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32476 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-914764 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)
TestFunctional/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 config get cpus: exit status 14 (90.911509ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 config get cpus: exit status 14 (64.662874ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
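
Both Non-zero exit lines above rely on the same contract: "config get" on an unset key exits with status 14 and prints the "specified key could not be found" error. A sketch of asserting that exit code from Go, with names taken from this run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // "config get" on a key that was just unset should exit 14,
        // as both Non-zero exit lines in the log demonstrate.
        err := exec.Command("out/minikube-linux-amd64",
            "-p", "functional-914764", "config", "get", "cpus").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 14 {
            fmt.Println("got the expected exit status 14 for an unset key")
        } else {
            fmt.Println("unexpected result:", err)
        }
    }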
TestFunctional/parallel/DryRun (0.54s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-914764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-914764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (223.562627ms)
-- stdout --
	* [functional-914764] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0510 17:09:06.825923  764850 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:09:06.826060  764850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:06.826070  764850 out.go:358] Setting ErrFile to fd 2...
	I0510 17:09:06.826075  764850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:06.826299  764850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:09:06.826856  764850 out.go:352] Setting JSON to false
	I0510 17:09:06.828040  764850 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10294,"bootTime":1746886653,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:09:06.828141  764850 start.go:140] virtualization: kvm guest
	I0510 17:09:06.830227  764850 out.go:177] * [functional-914764] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:09:06.832040  764850 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:09:06.832075  764850 notify.go:220] Checking for updates...
	I0510 17:09:06.834841  764850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:09:06.836303  764850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:09:06.837567  764850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:09:06.838888  764850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:09:06.840094  764850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:09:06.841756  764850 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:06.842346  764850 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:06.889229  764850 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:06.889378  764850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:06.952501  764850 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:55 SystemTime:2025-05-10 17:09:06.937998436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:06.952610  764850 docker.go:318] overlay module found
	I0510 17:09:06.954922  764850 out.go:177] * Using the docker driver based on existing profile
	I0510 17:09:06.956470  764850 start.go:304] selected driver: docker
	I0510 17:09:06.956489  764850 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:06.956564  764850 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:06.959010  764850 out.go:201] 
	W0510 17:09:06.960331  764850 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0510 17:09:06.961767  764850 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-914764 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.54s)
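
The dry run fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because the requested 250 MiB is below minikube's 1800 MB usable floor; no container is ever created. A sketch of that bound check, where only the 1800 MB value comes from the message above and the constant name is illustrative:

    package main

    import "fmt"

    // minUsableMemoryMB is the floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY
    // message; the constant name here is an assumption, not minikube's.
    const minUsableMemoryMB = 1800

    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250)) // mirrors --memory 250MB from the dry run
    }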
TestFunctional/parallel/InternationalLanguage (0.24s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-914764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-914764 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (239.428424ms)
-- stdout --
	* [functional-914764] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0510 17:09:06.974231  764930 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:09:06.974612  764930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:06.974624  764930 out.go:358] Setting ErrFile to fd 2...
	I0510 17:09:06.974631  764930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:09:06.975109  764930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:09:06.975867  764930 out.go:352] Setting JSON to false
	I0510 17:09:06.977226  764930 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10294,"bootTime":1746886653,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:09:06.977327  764930 start.go:140] virtualization: kvm guest
	I0510 17:09:06.979805  764930 out.go:177] * [functional-914764] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0510 17:09:06.981503  764930 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:09:06.982521  764930 notify.go:220] Checking for updates...
	I0510 17:09:06.986156  764930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:09:06.987429  764930 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:09:06.988899  764930 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:09:06.990462  764930 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:09:06.992736  764930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:09:06.994925  764930 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:09:06.995691  764930 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:09:07.020812  764930 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:09:07.020967  764930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:09:07.093709  764930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-05-10 17:09:07.082073403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:09:07.093867  764930 docker.go:318] overlay module found
	I0510 17:09:07.096931  764930 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0510 17:09:07.098502  764930 start.go:304] selected driver: docker
	I0510 17:09:07.098523  764930 start.go:908] validating driver "docker" against &{Name:functional-914764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1746731792-20718@sha256:074d9afa1e8827ea0e101248fc55098d304814b5d8bf485882a81afc90084155 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.0 ClusterName:functional-914764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.33.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0510 17:09:07.098633  764930 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:09:07.101636  764930 out.go:201] 
	W0510 17:09:07.103269  764930 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0510 17:09:07.104409  764930 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
TestFunctional/parallel/StatusCmd (1.11s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
TestFunctional/parallel/ServiceCmdConnect (8.65s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-914764 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-914764 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-qpwnx" [a4043209-9f40-4d30-a859-082ebbb6ca57] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-qpwnx" [a4043209-9f40-4d30-a859-082ebbb6ca57] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003541641s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32065
functional_test.go:1692: http://192.168.49.2:32065: success! body:
Hostname: hello-node-connect-58f9cf68d8-qpwnx
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32065
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.65s)
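
After the deployment reports healthy, the test asks minikube for the NodePort URL and issues a plain HTTP GET; the echoserver body above is that response. A sketch of the probe step, reusing the URL printed in this run (a fresh run would get a different NodePort):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // URL as printed by "minikube service hello-node-connect --url" above.
        resp, err := http.Get("http://192.168.49.2:32065")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status %d\n%s", resp.StatusCode, body)
    }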
TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)
TestFunctional/parallel/SSHCmd (0.54s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)
TestFunctional/parallel/CpCmd (2.02s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh -n functional-914764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cp functional-914764:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1464626511/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh -n functional-914764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh -n functional-914764 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
TestFunctional/parallel/FileSync (0.25s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/729815/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /etc/test/nested/copy/729815/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.68s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/729815.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /etc/ssl/certs/729815.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/729815.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /usr/share/ca-certificates/729815.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/7298152.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /etc/ssl/certs/7298152.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/7298152.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /usr/share/ca-certificates/7298152.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
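For reference, the hash-named files checked above (51391683.0, 3ec20f2e.0) follow the openssl subject-hash naming convention for CA directories; a hand check along the same lines (assuming a synced cert at /etc/ssl/certs/729815.pem inside the node, as in this run) might be:

    # print the subject hash; the c_rehash-style filename is this hash plus a .0 suffix
    out/minikube-linux-amd64 -p functional-914764 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/729815.pem"
    # then confirm the matching hash-named file exists and holds the same cert
    out/minikube-linux-amd64 -p functional-914764 ssh "sudo cat /etc/ssl/certs/51391683.0"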

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-914764 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active docker": exit status 1 (283.699596ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active containerd": exit status 1 (293.939353ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
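For reference, since this run uses crio, the test asserts that the other container runtimes report inactive; a minimal hand check (the crio unit name is an assumption, not taken from this log) might be:

    # on a crio-based profile, docker and containerd should both be inactive
    out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active docker"      # expect: inactive
    out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active containerd"  # expect: inactive
    out/minikube-linux-amd64 -p functional-914764 ssh "sudo systemctl is-active crio"        # expect: active (assumed unit name)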

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-914764 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-914764 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-2w246" [65cf67e1-9a76-4e4e-bc1a-c463603b72af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-2w246" [65cf67e1-9a76-4e4e-bc1a-c463603b72af] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004975054s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)
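For reference, the deployment pattern above is plain kubectl (image and names taken from this run; the wait step is a sketch of what the test harness does programmatically):

    # create a deployment and expose it as a NodePort service
    kubectl --context functional-914764 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-914764 expose deployment hello-node --type=NodePort --port=8080
    # block until the pod is Ready
    kubectl --context functional-914764 wait --for=condition=ready pod -l app=hello-node --timeout=120s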

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "404.326734ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "109.00296ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/MountCmd/any-port (6.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdany-port1424139336/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1746896945960141278" to /tmp/TestFunctionalparallelMountCmdany-port1424139336/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1746896945960141278" to /tmp/TestFunctionalparallelMountCmdany-port1424139336/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1746896945960141278" to /tmp/TestFunctionalparallelMountCmdany-port1424139336/001/test-1746896945960141278
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (402.029115ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0510 17:09:06.362498  729815 retry.go:31] will retry after 375.674404ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 10 17:09 created-by-test
-rw-r--r-- 1 docker docker 24 May 10 17:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 10 17:09 test-1746896945960141278
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh cat /mount-9p/test-1746896945960141278
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-914764 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [21acef18-51e6-4c81-b13f-6cdcf9c7eade] Pending
helpers_test.go:344: "busybox-mount" [21acef18-51e6-4c81-b13f-6cdcf9c7eade] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [21acef18-51e6-4c81-b13f-6cdcf9c7eade] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [21acef18-51e6-4c81-b13f-6cdcf9c7eade] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003534578s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-914764 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdany-port1424139336/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.87s)
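For reference, the 9p mount flow above can be tried by hand; a minimal sketch (the /tmp/demo host path is illustrative, the guest path /mount-9p is from this run):

    # mount a host directory into the node over 9p; this runs in the foreground
    out/minikube-linux-amd64 mount -p functional-914764 /tmp/demo:/mount-9p &
    # from another shell, verify the mount landed and inspect it
    out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-914764 ssh -- ls -la /mount-9p
    # tear it down when done
    out/minikube-linux-amd64 -p functional-914764 ssh "sudo umount -f /mount-9p"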

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "474.60285ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.9046ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-914764 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.33.0
registry.k8s.io/kube-proxy:v1.33.0
registry.k8s.io/kube-controller-manager:v1.33.0
registry.k8s.io/kube-apiserver:v1.33.0
registry.k8s.io/etcd:3.5.21-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.12.0
localhost/minikube-local-cache-test:functional-914764
localhost/kicbase/echo-server:functional-914764
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250214-acbabc1a
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-914764 image ls --format short --alsologtostderr:
I0510 17:12:25.128306  772493 out.go:345] Setting OutFile to fd 1 ...
I0510 17:12:25.128572  772493 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.128581  772493 out.go:358] Setting ErrFile to fd 2...
I0510 17:12:25.128585  772493 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.128759  772493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
I0510 17:12:25.129327  772493 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.129424  772493 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.129782  772493 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:12:25.148563  772493 ssh_runner.go:195] Run: systemctl --version
I0510 17:12:25.148632  772493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:12:25.165338  772493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:12:25.247929  772493 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-914764 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-914764  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.33.0            | 1d579cb6d6967 | 95.7MB |
| registry.k8s.io/kube-proxy              | v1.33.0            | f1184a0bd7fe5 | 99.1MB |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | df3849d954c98 | 95.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.21-0           | 499038711c081 | 154MB  |
| registry.k8s.io/kube-apiserver          | v1.33.0            | 6ba9545b2183e | 103MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-914764  | ac7eb6b3defd8 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.12.0            | 1cf5f116067c6 | 71.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.33.0            | 8d72586a76469 | 74.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-914764 image ls --format table --alsologtostderr:
I0510 17:12:25.546680  772593 out.go:345] Setting OutFile to fd 1 ...
I0510 17:12:25.546928  772593 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.546937  772593 out.go:358] Setting ErrFile to fd 2...
I0510 17:12:25.546942  772593 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.547215  772593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
I0510 17:12:25.547836  772593 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.547925  772593 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.548266  772593 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:12:25.564939  772593 ssh_runner.go:195] Run: systemctl --version
I0510 17:12:25.564985  772593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:12:25.581762  772593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:12:25.664069  772593 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-914764 image ls --format json --alsologtostderr:
[{"id":"df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495","docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"95703604"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1","repoDigests":["registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a","registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121"
],"repoTags":["registry.k8s.io/etcd:3.5.21-0"],"size":"154190592"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5
bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-914764"],"size":"4943877"},{"id":"ac7eb6b3defd8a7b8cf2be1639cea52a5ecb609130f798e18b4c87190cc6294c","repoDigests":["localhost/minikube-local-cache-test@sha256:623b8989541a92176932db7d38dc7836fa5dc4d56f4936fa860192c37e1fd247"],"repoTags":["localhost/minikub
e-local-cache-test:functional-914764"],"size":"3330"},{"id":"1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b","repoDigests":["registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee","registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.0"],"size":"71169915"},{"id":"1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9de627a31852175b8308cb7c8d92f15365672f6bf26026719cc1c05a03580bc4","registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.33.0"],"size":"95653192"},{"id":"f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68","repoDigests":["registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b","regist
ry.k8s.io/kube-proxy@sha256:32b893c37d363b18711b397f6ccb29655e3d08183d410f1a93ad298992c9ea7e"],"repoTags":["registry.k8s.io/kube-proxy:v1.33.0"],"size":"99145113"},{"id":"8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f","registry.k8s.io/kube-scheduler@sha256:b375b81c7f253be3f093232650b153288e7f90be3d02a025fd602b4b40fd95c5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.33.0"],"size":"74501448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32","registry.k8s.io/kube-apiserver@sha256:6c0f4ade3e5a3
4d8791a48671b127a00dc114e84b70ec4d92e586c17d68a1ca6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.33.0"],"size":"102858210"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-914764 image ls --format json --alsologtostderr:
I0510 17:12:25.337111  772543 out.go:345] Setting OutFile to fd 1 ...
I0510 17:12:25.337261  772543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.337272  772543 out.go:358] Setting ErrFile to fd 2...
I0510 17:12:25.337279  772543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.337536  772543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
I0510 17:12:25.338136  772543 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.338277  772543 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.338694  772543 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:12:25.356875  772543 ssh_runner.go:195] Run: systemctl --version
I0510 17:12:25.356934  772543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:12:25.374005  772543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:12:25.460141  772543 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
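For reference, the JSON form lends itself to scripting; a small sketch (assumes jq is available on the host, which this report does not itself use):

    # list repo tags only, one per line
    out/minikube-linux-amd64 -p functional-914764 image ls --format json | jq -r '.[].repoTags[]'
    # sum the reported image sizes in bytes (size fields are strings in this output)
    out/minikube-linux-amd64 -p functional-914764 image ls --format json | jq '[.[].size | tonumber] | add'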

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-914764 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32
- registry.k8s.io/kube-apiserver@sha256:6c0f4ade3e5a34d8791a48671b127a00dc114e84b70ec4d92e586c17d68a1ca6
repoTags:
- registry.k8s.io/kube-apiserver:v1.33.0
size: "102858210"
- id: 499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1
repoDigests:
- registry.k8s.io/etcd@sha256:21d2177d708b53ac0fbd1c073c334d58f913eb75da293ff086610e61af03630a
- registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121
repoTags:
- registry.k8s.io/etcd:3.5.21-0
size: "154190592"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ac7eb6b3defd8a7b8cf2be1639cea52a5ecb609130f798e18b4c87190cc6294c
repoDigests:
- localhost/minikube-local-cache-test@sha256:623b8989541a92176932db7d38dc7836fa5dc4d56f4936fa860192c37e1fd247
repoTags:
- localhost/minikube-local-cache-test:functional-914764
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
- docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "95703604"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-914764
size: "4943877"
- id: 1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:2324f485c8db937628a18c293d946327f3a7229b9f77213e8f2256f0b616a4ee
- registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.0
size: "71169915"
- id: 1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9de627a31852175b8308cb7c8d92f15365672f6bf26026719cc1c05a03580bc4
- registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a
repoTags:
- registry.k8s.io/kube-controller-manager:v1.33.0
size: "95653192"
- id: f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68
repoDigests:
- registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b
- registry.k8s.io/kube-proxy@sha256:32b893c37d363b18711b397f6ccb29655e3d08183d410f1a93ad298992c9ea7e
repoTags:
- registry.k8s.io/kube-proxy:v1.33.0
size: "99145113"
- id: 8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f
- registry.k8s.io/kube-scheduler@sha256:b375b81c7f253be3f093232650b153288e7f90be3d02a025fd602b4b40fd95c5
repoTags:
- registry.k8s.io/kube-scheduler:v1.33.0
size: "74501448"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-914764 image ls --format yaml --alsologtostderr:
I0510 17:12:25.754430  772643 out.go:345] Setting OutFile to fd 1 ...
I0510 17:12:25.754684  772643 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.754693  772643 out.go:358] Setting ErrFile to fd 2...
I0510 17:12:25.754697  772643 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:25.754909  772643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
I0510 17:12:25.755515  772643 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.755616  772643 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:25.755986  772643 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:12:25.774105  772643 ssh_runner.go:195] Run: systemctl --version
I0510 17:12:25.774158  772643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:12:25.790731  772643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:12:25.876040  772643 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh pgrep buildkitd: exit status 1 (240.857194ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image build -t localhost/my-image:functional-914764 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-914764 image build -t localhost/my-image:functional-914764 testdata/build --alsologtostderr: (1.590953886s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-914764 image build -t localhost/my-image:functional-914764 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3d8f6beab15
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-914764
--> 83696c77c32
Successfully tagged localhost/my-image:functional-914764
83696c77c32614123a6aaeaa9127d262dd69b687d4264a3b0568b67aac34ffcb
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-914764 image build -t localhost/my-image:functional-914764 testdata/build --alsologtostderr:
I0510 17:12:26.202770  772786 out.go:345] Setting OutFile to fd 1 ...
I0510 17:12:26.203683  772786 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:26.203702  772786 out.go:358] Setting ErrFile to fd 2...
I0510 17:12:26.203706  772786 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0510 17:12:26.203914  772786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
I0510 17:12:26.204497  772786 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:26.205068  772786 config.go:182] Loaded profile config "functional-914764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
I0510 17:12:26.205449  772786 cli_runner.go:164] Run: docker container inspect functional-914764 --format={{.State.Status}}
I0510 17:12:26.222855  772786 ssh_runner.go:195] Run: systemctl --version
I0510 17:12:26.222916  772786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-914764
I0510 17:12:26.239887  772786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/functional-914764/id_rsa Username:docker}
I0510 17:12:26.328381  772786 build_images.go:161] Building image from path: /tmp/build.3457939793.tar
I0510 17:12:26.328454  772786 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0510 17:12:26.336911  772786 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3457939793.tar
I0510 17:12:26.340176  772786 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3457939793.tar: stat -c "%s %y" /var/lib/minikube/build/build.3457939793.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3457939793.tar': No such file or directory
I0510 17:12:26.340206  772786 ssh_runner.go:362] scp /tmp/build.3457939793.tar --> /var/lib/minikube/build/build.3457939793.tar (3072 bytes)
I0510 17:12:26.362658  772786 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3457939793
I0510 17:12:26.370842  772786 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3457939793 -xf /var/lib/minikube/build/build.3457939793.tar
I0510 17:12:26.379316  772786 crio.go:315] Building image: /var/lib/minikube/build/build.3457939793
I0510 17:12:26.379383  772786 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-914764 /var/lib/minikube/build/build.3457939793 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0510 17:12:27.723896  772786 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-914764 /var/lib/minikube/build/build.3457939793 --cgroup-manager=cgroupfs: (1.344487157s)
I0510 17:12:27.723970  772786 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3457939793
I0510 17:12:27.733377  772786 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3457939793.tar
I0510 17:12:27.741508  772786 build_images.go:217] Built localhost/my-image:functional-914764 from /tmp/build.3457939793.tar
I0510 17:12:27.741552  772786 build_images.go:133] succeeded building to: functional-914764
I0510 17:12:27.741559  772786 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)
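For reference, the in-node build above (podman under crio) is driven entirely by the minikube CLI; a minimal hand run, assuming a build context laid out like testdata/build (a directory containing a Dockerfile), might be:

    # build a directory containing a Dockerfile directly inside the node
    out/minikube-linux-amd64 -p functional-914764 image build -t localhost/my-image:functional-914764 testdata/build
    # confirm the result is visible to the container runtime
    out/minikube-linux-amd64 -p functional-914764 image ls | grep my-image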

TestFunctional/parallel/ImageCommands/Setup (1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-914764
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image load --daemon kicbase/echo-server:functional-914764 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image load --daemon kicbase/echo-server:functional-914764 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-914764
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image load --daemon kicbase/echo-server:functional-914764 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image save kicbase/echo-server:functional-914764 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image rm kicbase/echo-server:functional-914764 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)
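For reference, the save/load pair above gives a file-based round trip for moving images into the node; a sketch (image name from this run, the ./echo-server-save.tar path is illustrative):

    # export an image from the node's runtime to a tarball on the host
    out/minikube-linux-amd64 -p functional-914764 image save kicbase/echo-server:functional-914764 ./echo-server-save.tar
    # re-import the tarball into the node's runtime
    out/minikube-linux-amd64 -p functional-914764 image load ./echo-server-save.tar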

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdspecific-port835268472/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.185478ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0510 17:09:13.118873  729815 retry.go:31] will retry after 567.692515ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdspecific-port835268472/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh "sudo umount -f /mount-9p": exit status 1 (293.228894ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-914764 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdspecific-port835268472/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)
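For reference, the --port flag pins the host side of the 9p transport, which matters when a firewall only allows known ports; a sketch (port 46464 from this run, /tmp/demo illustrative):

    # serve the mount on a fixed host port instead of a random one
    out/minikube-linux-amd64 mount -p functional-914764 /tmp/demo:/mount-9p --port 46464 &
    # clean up all minikube mounts for the profile in one shot
    out/minikube-linux-amd64 mount -p functional-914764 --kill=true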

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-914764
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 image save --daemon kicbase/echo-server:functional-914764 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-914764
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 service list -o json
functional_test.go:1511: Took "329.66102ms" to run "out/minikube-linux-amd64 -p functional-914764 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-914764 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-914764 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-914764 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-914764 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 768404: os: process already finished
helpers_test.go:502: unable to terminate pid 767960: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31121
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)
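For reference, service URL discovery as exercised above and in the Format/URL subtests below:

    # print the https endpoint for a NodePort service without opening a browser
    out/minikube-linux-amd64 -p functional-914764 service --namespace=default --https --url hello-node
    # plain http variant
    out/minikube-linux-amd64 -p functional-914764 service hello-node --url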

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T" /mount1: exit status 1 (364.418338ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0510 17:09:15.088004  729815 retry.go:31] will retry after 386.253433ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-914764 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-914764 /tmp/TestFunctionalparallelMountCmdVerifyCleanup620297487/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
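
The ~386ms entry above comes from the harness's generic retry helper: a failed check is retried after a short, growing pause instead of failing the test outright. A minimal sketch of that pattern, assuming an illustrative retryAfter helper and backoff policy (not minikube's actual retry.go):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered, growing interval between tries (as the ~386ms log entry shows).
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	_ = retryAfter(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("exit status 1") // e.g. findmnt before the mount lands
		}
		return nil
	})
}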

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-914764 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31121
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
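
The test only asserts that a URL is printed. A quick follow-up probe of the reported NodePort endpoint would look roughly like this; the hard-coded URL and 5-second timeout are assumptions for illustration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// the endpoint found by the test above
	resp, err := client.Get("http://192.168.49.2:31121")
	if err != nil {
		fmt.Println("service not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("hello-node responded:", resp.Status)
}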

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 update-context --alsologtostderr -v=2
E0510 17:12:30.972114  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-914764 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
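
All three UpdateContextCmd subtests exercise `minikube update-context`, which rewrites the cluster server address in kubeconfig after an IP or port change. A minimal sketch of inspecting that field with client-go (the kubeconfig path is an assumed default; requires k8s.io/client-go in go.mod):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed default location
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	for name, cluster := range cfg.Clusters {
		fmt.Printf("%s -> %s\n", name, cluster.Server) // the field update-context keeps in sync
	}
}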

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-914764 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0510 17:17:03.264909  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
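
The "failed to stop process: signal: terminated" line above is how Go reports a child that exited because of SIGTERM: (*exec.Cmd).Wait returns an *exec.ExitError whose message is the signal name, which is expected here since the tunnel is being shut down deliberately. A minimal reproduction (sleep stands in for the tunnel daemon):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the long-running tunnel process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	_ = cmd.Process.Signal(syscall.SIGTERM) // what "stopping [...]" does
	err := cmd.Wait()
	fmt.Println(err) // prints "signal: terminated"
}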

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-914764
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-914764
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-914764
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (156.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m36.129698176s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (156.81s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 kubectl -- rollout status deployment/busybox: (2.072425676s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-84kkd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-l9src -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-rrl7q -- nslookup kubernetes.io
E0510 17:22:03.265263  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-84kkd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-l9src -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-rrl7q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-84kkd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-l9src -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-rrl7q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.26s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-84kkd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-84kkd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-l9src -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-l9src -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-rrl7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 kubectl -- exec busybox-58667487b6-rrl7q -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)
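
The busybox pipeline above (awk 'NR==5' | cut -d' ' -f3) simply takes the third space-separated field of the fifth line of nslookup output, where the resolved host.minikube.internal address appears. The same extraction in Go; the sample output is an illustrative assumption about busybox's nslookup format:

package main

import (
	"fmt"
	"strings"
)

func main() {
	out := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1
`
	lines := strings.Split(out, "\n")
	if len(lines) >= 5 {
		fields := strings.Split(lines[4], " ") // NR==5; cut splits on single spaces
		if len(fields) >= 3 {
			fmt.Println(fields[2]) // -f3 => 192.168.49.1, the host IP pinged above
		}
	}
}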

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 node add --alsologtostderr -v 5: (23.191269567s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.01s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-043378 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp testdata/cp-test.txt ha-043378:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3478097529/001/cp-test_ha-043378.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378:/home/docker/cp-test.txt ha-043378-m02:/home/docker/cp-test_ha-043378_ha-043378-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test_ha-043378_ha-043378-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378:/home/docker/cp-test.txt ha-043378-m03:/home/docker/cp-test_ha-043378_ha-043378-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test_ha-043378_ha-043378-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378:/home/docker/cp-test.txt ha-043378-m04:/home/docker/cp-test_ha-043378_ha-043378-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test_ha-043378_ha-043378-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp testdata/cp-test.txt ha-043378-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3478097529/001/cp-test_ha-043378-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m02:/home/docker/cp-test.txt ha-043378:/home/docker/cp-test_ha-043378-m02_ha-043378.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test_ha-043378-m02_ha-043378.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m02:/home/docker/cp-test.txt ha-043378-m03:/home/docker/cp-test_ha-043378-m02_ha-043378-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test_ha-043378-m02_ha-043378-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m02:/home/docker/cp-test.txt ha-043378-m04:/home/docker/cp-test_ha-043378-m02_ha-043378-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test_ha-043378-m02_ha-043378-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp testdata/cp-test.txt ha-043378-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3478097529/001/cp-test_ha-043378-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m03:/home/docker/cp-test.txt ha-043378:/home/docker/cp-test_ha-043378-m03_ha-043378.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test_ha-043378-m03_ha-043378.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m03:/home/docker/cp-test.txt ha-043378-m02:/home/docker/cp-test_ha-043378-m03_ha-043378-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test_ha-043378-m03_ha-043378-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m03:/home/docker/cp-test.txt ha-043378-m04:/home/docker/cp-test_ha-043378-m03_ha-043378-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test_ha-043378-m03_ha-043378-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp testdata/cp-test.txt ha-043378-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3478097529/001/cp-test_ha-043378-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m04:/home/docker/cp-test.txt ha-043378:/home/docker/cp-test_ha-043378-m04_ha-043378.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378 "sudo cat /home/docker/cp-test_ha-043378-m04_ha-043378.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m04:/home/docker/cp-test.txt ha-043378-m02:/home/docker/cp-test_ha-043378-m04_ha-043378-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m02 "sudo cat /home/docker/cp-test_ha-043378-m04_ha-043378-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 cp ha-043378-m04:/home/docker/cp-test.txt ha-043378-m03:/home/docker/cp-test_ha-043378-m04_ha-043378-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 ssh -n ha-043378-m03 "sudo cat /home/docker/cp-test_ha-043378-m04_ha-043378-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.74s)
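
The CopyFile block walks every ordered pair of nodes, copying cp-test.txt from each source to each destination and verifying the result over ssh. A sketch of how that cp-test_<src>_<dst>.txt matrix expands for the four nodes seen above:

package main

import "fmt"

func main() {
	nodes := []string{"ha-043378", "ha-043378-m02", "ha-043378-m03", "ha-043378-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// mirrors the cp-test_<src>_<dst>.txt names in the log
			fmt.Printf("%s:/home/docker/cp-test.txt -> %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}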

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 node stop m02 --alsologtostderr -v 5: (11.882946844s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5: exit status 7 (652.228071ms)

                                                
                                                
-- stdout --
	ha-043378
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-043378-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-043378-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-043378-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 17:22:58.259629  797708 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:22:58.259749  797708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:22:58.259758  797708 out.go:358] Setting ErrFile to fd 2...
	I0510 17:22:58.259763  797708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:22:58.260024  797708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:22:58.260234  797708 out.go:352] Setting JSON to false
	I0510 17:22:58.260279  797708 mustload.go:65] Loading cluster: ha-043378
	I0510 17:22:58.260378  797708 notify.go:220] Checking for updates...
	I0510 17:22:58.260780  797708 config.go:182] Loaded profile config "ha-043378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:22:58.260804  797708 status.go:174] checking status of ha-043378 ...
	I0510 17:22:58.261277  797708 cli_runner.go:164] Run: docker container inspect ha-043378 --format={{.State.Status}}
	I0510 17:22:58.279993  797708 status.go:371] ha-043378 host status = "Running" (err=<nil>)
	I0510 17:22:58.280023  797708 host.go:66] Checking if "ha-043378" exists ...
	I0510 17:22:58.280299  797708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-043378
	I0510 17:22:58.298214  797708 host.go:66] Checking if "ha-043378" exists ...
	I0510 17:22:58.298538  797708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:22:58.298580  797708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-043378
	I0510 17:22:58.316748  797708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/ha-043378/id_rsa Username:docker}
	I0510 17:22:58.413467  797708 ssh_runner.go:195] Run: systemctl --version
	I0510 17:22:58.417824  797708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:22:58.428433  797708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:22:58.479864  797708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-05-10 17:22:58.469998917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:22:58.480382  797708 kubeconfig.go:125] found "ha-043378" server: "https://192.168.49.254:8443"
	I0510 17:22:58.480422  797708 api_server.go:166] Checking apiserver status ...
	I0510 17:22:58.480466  797708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:22:58.491741  797708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1591/cgroup
	I0510 17:22:58.500836  797708 api_server.go:182] apiserver freezer: "11:freezer:/docker/8d83e5889086617a90877a9e2e11e12a31db0daff1a3e11f71c6b5d541d3be26/crio/crio-d6d6d0237fdf919a21cb9f6829bd3b3525c2e3e0e7e638206caf874760befe90"
	I0510 17:22:58.500907  797708 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8d83e5889086617a90877a9e2e11e12a31db0daff1a3e11f71c6b5d541d3be26/crio/crio-d6d6d0237fdf919a21cb9f6829bd3b3525c2e3e0e7e638206caf874760befe90/freezer.state
	I0510 17:22:58.509262  797708 api_server.go:204] freezer state: "THAWED"
	I0510 17:22:58.509293  797708 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0510 17:22:58.513092  797708 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0510 17:22:58.513118  797708 status.go:463] ha-043378 apiserver status = Running (err=<nil>)
	I0510 17:22:58.513129  797708 status.go:176] ha-043378 status: &{Name:ha-043378 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:22:58.513146  797708 status.go:174] checking status of ha-043378-m02 ...
	I0510 17:22:58.513381  797708 cli_runner.go:164] Run: docker container inspect ha-043378-m02 --format={{.State.Status}}
	I0510 17:22:58.531295  797708 status.go:371] ha-043378-m02 host status = "Stopped" (err=<nil>)
	I0510 17:22:58.531323  797708 status.go:384] host is not running, skipping remaining checks
	I0510 17:22:58.531330  797708 status.go:176] ha-043378-m02 status: &{Name:ha-043378-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:22:58.531355  797708 status.go:174] checking status of ha-043378-m03 ...
	I0510 17:22:58.531757  797708 cli_runner.go:164] Run: docker container inspect ha-043378-m03 --format={{.State.Status}}
	I0510 17:22:58.548487  797708 status.go:371] ha-043378-m03 host status = "Running" (err=<nil>)
	I0510 17:22:58.548514  797708 host.go:66] Checking if "ha-043378-m03" exists ...
	I0510 17:22:58.548827  797708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-043378-m03
	I0510 17:22:58.565849  797708 host.go:66] Checking if "ha-043378-m03" exists ...
	I0510 17:22:58.566099  797708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:22:58.566135  797708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-043378-m03
	I0510 17:22:58.583557  797708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/ha-043378-m03/id_rsa Username:docker}
	I0510 17:22:58.668634  797708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:22:58.680040  797708 kubeconfig.go:125] found "ha-043378" server: "https://192.168.49.254:8443"
	I0510 17:22:58.680069  797708 api_server.go:166] Checking apiserver status ...
	I0510 17:22:58.680099  797708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:22:58.690106  797708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	I0510 17:22:58.698953  797708 api_server.go:182] apiserver freezer: "11:freezer:/docker/c41a00c2042843bc527d054d4a37f20df45ffde8594615d592d7be90e967f07b/crio/crio-b4c79471f5b5ff20b14ebef4a47b7875f3c1ce9aaa976817a6ec14f3504050b5"
	I0510 17:22:58.699017  797708 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c41a00c2042843bc527d054d4a37f20df45ffde8594615d592d7be90e967f07b/crio/crio-b4c79471f5b5ff20b14ebef4a47b7875f3c1ce9aaa976817a6ec14f3504050b5/freezer.state
	I0510 17:22:58.706702  797708 api_server.go:204] freezer state: "THAWED"
	I0510 17:22:58.706819  797708 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0510 17:22:58.710735  797708 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0510 17:22:58.710757  797708 status.go:463] ha-043378-m03 apiserver status = Running (err=<nil>)
	I0510 17:22:58.710766  797708 status.go:176] ha-043378-m03 status: &{Name:ha-043378-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:22:58.710782  797708 status.go:174] checking status of ha-043378-m04 ...
	I0510 17:22:58.711030  797708 cli_runner.go:164] Run: docker container inspect ha-043378-m04 --format={{.State.Status}}
	I0510 17:22:58.727925  797708 status.go:371] ha-043378-m04 host status = "Running" (err=<nil>)
	I0510 17:22:58.727956  797708 host.go:66] Checking if "ha-043378-m04" exists ...
	I0510 17:22:58.728287  797708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-043378-m04
	I0510 17:22:58.746594  797708 host.go:66] Checking if "ha-043378-m04" exists ...
	I0510 17:22:58.746843  797708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:22:58.746878  797708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-043378-m04
	I0510 17:22:58.764778  797708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/ha-043378-m04/id_rsa Username:docker}
	I0510 17:22:58.848505  797708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:22:58.860171  797708 status.go:176] ha-043378-m04 status: &{Name:ha-043378-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.54s)
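
The status trace above shows how "apiserver: Running" is decided: locate the kube-apiserver freezer cgroup, confirm it is THAWED, then probe https://192.168.49.254:8443/healthz for a 200. A minimal sketch of the HTTP half; the InsecureSkipVerify shortcut is an assumption for illustration (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// the HA virtual IP and port seen in the trace
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unhealthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 => Running
}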

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (20.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 node start m02 --alsologtostderr -v 5: (19.377200305s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (116.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 stop --alsologtostderr -v 5
E0510 17:23:26.333649  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 stop --alsologtostderr -v 5: (26.081613231s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 start --wait true --alsologtostderr -v 5
E0510 17:24:05.059581  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.066045  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.077487  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.098951  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.140436  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.221749  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.383381  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:05.705139  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:06.347209  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:07.628875  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:10.191627  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:15.312950  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:25.555204  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:24:46.036579  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 start --wait true --alsologtostderr -v 5: (1m30.262748573s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (116.45s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node delete m03 --alsologtostderr -v 5
E0510 17:25:26.997959  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 node delete m03 --alsologtostderr -v 5: (11.50025661s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.25s)
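
The Ready check above pipes `kubectl get nodes` through a go-template. text/template evaluates the same expression locally; the two-node data below is a made-up sample shaped like the kubectl JSON:

package main

import (
	"os"
	"text/template"
)

func main() {
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	data := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
			{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, data) // prints " True" once per Ready node
}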

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 stop --alsologtostderr -v 5: (35.477749455s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5: exit status 7 (104.766799ms)

                                                
                                                
-- stdout --
	ha-043378
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-043378-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-043378-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0510 17:26:05.575570  814391 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:26:05.575712  814391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:26:05.575720  814391 out.go:358] Setting ErrFile to fd 2...
	I0510 17:26:05.575724  814391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:26:05.575931  814391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:26:05.576118  814391 out.go:352] Setting JSON to false
	I0510 17:26:05.576158  814391 mustload.go:65] Loading cluster: ha-043378
	I0510 17:26:05.576251  814391 notify.go:220] Checking for updates...
	I0510 17:26:05.576589  814391 config.go:182] Loaded profile config "ha-043378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:26:05.576617  814391 status.go:174] checking status of ha-043378 ...
	I0510 17:26:05.577126  814391 cli_runner.go:164] Run: docker container inspect ha-043378 --format={{.State.Status}}
	I0510 17:26:05.594912  814391 status.go:371] ha-043378 host status = "Stopped" (err=<nil>)
	I0510 17:26:05.594935  814391 status.go:384] host is not running, skipping remaining checks
	I0510 17:26:05.594942  814391 status.go:176] ha-043378 status: &{Name:ha-043378 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:26:05.594966  814391 status.go:174] checking status of ha-043378-m02 ...
	I0510 17:26:05.595243  814391 cli_runner.go:164] Run: docker container inspect ha-043378-m02 --format={{.State.Status}}
	I0510 17:26:05.612336  814391 status.go:371] ha-043378-m02 host status = "Stopped" (err=<nil>)
	I0510 17:26:05.612357  814391 status.go:384] host is not running, skipping remaining checks
	I0510 17:26:05.612364  814391 status.go:176] ha-043378-m02 status: &{Name:ha-043378-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:26:05.612398  814391 status.go:174] checking status of ha-043378-m04 ...
	I0510 17:26:05.612649  814391 cli_runner.go:164] Run: docker container inspect ha-043378-m04 --format={{.State.Status}}
	I0510 17:26:05.629675  814391 status.go:371] ha-043378-m04 host status = "Stopped" (err=<nil>)
	I0510 17:26:05.629724  814391 status.go:384] host is not running, skipping remaining checks
	I0510 17:26:05.629734  814391 status.go:176] ha-043378-m04 status: &{Name:ha-043378-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.58s)
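
The "&{Name:... Host:Stopped ...}" lines in the trace are a %+v-style print of minikube's per-node status value. A struct with the same fields, reconstructed from the printed output (the field types are assumptions):

package main

import "fmt"

type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := &Status{Name: "ha-043378-m04", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true}
	fmt.Printf("%+v\n", s) // matches the log's &{Name:... Worker:true ...} form
}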

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0510 17:26:48.919801  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:27:03.265408  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m24.26707386s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (85.00s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-043378 node add --control-plane --alsologtostderr -v 5: (36.70954422s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-043378 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (71.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-801956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0510 17:29:05.060560  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-801956 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m11.93822573s)
--- PASS: TestJSONOutput/start/Command (71.94s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-801956 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-801956 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-801956 --output=json --user=testUser
E0510 17:29:32.763296  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-801956 --output=json --user=testUser: (5.761582285s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-776823 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-776823 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.109388ms)

-- stdout --
	{"specversion":"1.0","id":"bd1bcb96-c2a6-4e78-82ef-360090bbe495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-776823] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a6c944e-4499-476a-a243-08acec2ba7a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20720"}}
	{"specversion":"1.0","id":"4f6ff400-229f-41ed-87bb-dd48bafdfaeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2c8a84a-2b0d-45b0-a07f-bdda7e6b4850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig"}}
	{"specversion":"1.0","id":"38d1236c-e432-4dad-9824-95e5a8e0cac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube"}}
	{"specversion":"1.0","id":"58c070c3-44cb-448d-8ded-d1b05c3ac157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ef5e637d-e7ee-4bbc-9f51-214d51d249b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"35dba06b-86bd-4310-9303-d435c4454314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-776823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-776823
--- PASS: TestErrorJSONOutput (0.20s)
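
Aside: each stdout line above is a CloudEvents-style JSON object, which is what makes --output=json machine-consumable. Below is a minimal, hypothetical Go consumer for that stream; the event struct is inferred from the fields visible in the log and is a local stand-in, not minikube's own type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the shape of the log lines above (specversion, id, type, data).
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// Error events also carry exitcode/advice, as in the last line above.
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}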

TestKicCustomNetwork/create_custom_network (33.37s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-011326 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-011326 --network=: (31.269473236s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-011326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-011326
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-011326: (2.082415497s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.37s)

TestKicCustomNetwork/use_default_bridge_network (27.37s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-164421 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-164421 --network=bridge: (25.442962194s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-164421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-164421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-164421: (1.912992942s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.37s)

TestKicExistingNetwork (24.24s)

=== RUN   TestKicExistingNetwork
I0510 17:30:41.948681  729815 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0510 17:30:41.965957  729815 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0510 17:30:41.966037  729815 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0510 17:30:41.966061  729815 cli_runner.go:164] Run: docker network inspect existing-network
W0510 17:30:41.981608  729815 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0510 17:30:41.981638  729815 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0510 17:30:41.981652  729815 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0510 17:30:41.981797  729815 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0510 17:30:41.998873  729815 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a012d972c6af IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:ec:c7:22:93:07} reservation:<nil>}
I0510 17:30:41.999328  729815 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001571050}
I0510 17:30:41.999368  729815 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0510 17:30:41.999435  729815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0510 17:30:42.049621  729815 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-647129 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-647129 --network=existing-network: (22.185334821s)
helpers_test.go:175: Cleaning up "existing-network-647129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-647129
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-647129: (1.923269909s)
I0510 17:31:06.175069  729815 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.24s)
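
Aside: the network.go lines above show the free-subnet scan, skipping 192.168.49.0/24 because bridge br-a012d972c6af already owns it and settling on 192.168.58.0/24. The stand-alone Go sketch below mimics that scan against the host's interfaces; the candidate range and the step of 9 between third octets are assumptions read off this one run (49 -> 58), not minikube's actual table.

package main

import (
	"fmt"
	"net"
)

// freeSubnet returns the first candidate /24 that no host interface sits in.
func freeSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 49; third < 255; third += 9 { // 49, 58, 67, ... as in the log
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		taken := false
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && candidate.Contains(ipn.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := freeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", subnet)
}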

TestKicCustomSubnet (25.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-335323 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-335323 --subnet=192.168.60.0/24: (23.149349852s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-335323 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-335323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-335323
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-335323: (2.047948413s)
--- PASS: TestKicCustomSubnet (25.22s)

TestKicStaticIP (27.14s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-411452 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-411452 --static-ip=192.168.200.200: (24.951481449s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-411452 ip
helpers_test.go:175: Cleaning up "static-ip-411452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-411452
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-411452: (2.059179474s)
--- PASS: TestKicStaticIP (27.14s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (52.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-057523 --driver=docker  --container-runtime=crio
E0510 17:32:03.268997  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-057523 --driver=docker  --container-runtime=crio: (24.033956528s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-071541 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-071541 --driver=docker  --container-runtime=crio: (23.010842945s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-057523
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-071541
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-071541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-071541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-071541: (1.844927979s)
helpers_test.go:175: Cleaning up "first-057523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-057523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-057523: (2.227832286s)
--- PASS: TestMinikubeProfile (52.26s)

TestMountStart/serial/StartWithMountFirst (5.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-910335 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-910335 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.184017296s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.18s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-910335 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-926155 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-926155 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.232925297s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.23s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926155 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-910335 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-910335 --alsologtostderr -v=5: (1.579284791s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926155 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-926155
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-926155: (1.173987061s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.06s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-926155
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-926155: (6.060558485s)
--- PASS: TestMountStart/serial/RestartStopped (7.06s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926155 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (94.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-729056 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0510 17:34:05.059923  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-729056 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m33.754908681s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.19s)

TestMultiNode/serial/DeployApp2Nodes (3.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-729056 -- rollout status deployment/busybox: (1.978152769s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-q675l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-qdqqm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-q675l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-qdqqm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-q675l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-qdqqm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.39s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-q675l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-q675l -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-qdqqm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-729056 -- exec busybox-58667487b6-qdqqm -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
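
Aside: the busybox pipeline above leans on nslookup's fixed output layout: the test assumes the answer appears on line 5 (awk 'NR==5') and that the IP is the third space-separated field (cut -d' ' -f3), yielding the host gateway 192.168.67.1, which is then pinged. Inside a pod the same resolution is a single call in Go; this sketch only resolves, since ping needs raw-socket privileges.

package main

import (
	"fmt"
	"net"
)

func main() {
	// host.minikube.internal only resolves inside a minikube cluster.
	ips, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed (expected outside a pod):", err)
		return
	}
	fmt.Println("host gateway:", ips[0])
}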

TestMultiNode/serial/AddNode (24.7s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-729056 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-729056 -v=5 --alsologtostderr: (24.11338662s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.70s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-729056 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (8.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp testdata/cp-test.txt multinode-729056:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1677622084/001/cp-test_multinode-729056.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056:/home/docker/cp-test.txt multinode-729056-m02:/home/docker/cp-test_multinode-729056_multinode-729056-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m02 "sudo cat /home/docker/cp-test_multinode-729056_multinode-729056-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056:/home/docker/cp-test.txt multinode-729056-m03:/home/docker/cp-test_multinode-729056_multinode-729056-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m03 "sudo cat /home/docker/cp-test_multinode-729056_multinode-729056-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp testdata/cp-test.txt multinode-729056-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1677622084/001/cp-test_multinode-729056-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056-m02:/home/docker/cp-test.txt multinode-729056:/home/docker/cp-test_multinode-729056-m02_multinode-729056.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056 "sudo cat /home/docker/cp-test_multinode-729056-m02_multinode-729056.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056-m02:/home/docker/cp-test.txt multinode-729056-m03:/home/docker/cp-test_multinode-729056-m02_multinode-729056-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m03 "sudo cat /home/docker/cp-test_multinode-729056-m02_multinode-729056-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp testdata/cp-test.txt multinode-729056-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1677622084/001/cp-test_multinode-729056-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056-m03:/home/docker/cp-test.txt multinode-729056:/home/docker/cp-test_multinode-729056-m03_multinode-729056.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056 "sudo cat /home/docker/cp-test_multinode-729056-m03_multinode-729056.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 cp multinode-729056-m03:/home/docker/cp-test.txt multinode-729056-m02:/home/docker/cp-test_multinode-729056-m03_multinode-729056-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 ssh -n multinode-729056-m02 "sudo cat /home/docker/cp-test_multinode-729056-m03_multinode-729056-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.92s)

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-729056 node stop m03: (1.176648073s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-729056 status: exit status 7 (458.697611ms)

-- stdout --
	multinode-729056
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-729056-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-729056-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr: exit status 7 (448.459914ms)

-- stdout --
	multinode-729056
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-729056-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-729056-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0510 17:35:28.074031  880413 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:35:28.074337  880413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:35:28.074348  880413 out.go:358] Setting ErrFile to fd 2...
	I0510 17:35:28.074352  880413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:35:28.074528  880413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:35:28.074682  880413 out.go:352] Setting JSON to false
	I0510 17:35:28.074713  880413 mustload.go:65] Loading cluster: multinode-729056
	I0510 17:35:28.074825  880413 notify.go:220] Checking for updates...
	I0510 17:35:28.075104  880413 config.go:182] Loaded profile config "multinode-729056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:35:28.075126  880413 status.go:174] checking status of multinode-729056 ...
	I0510 17:35:28.075598  880413 cli_runner.go:164] Run: docker container inspect multinode-729056 --format={{.State.Status}}
	I0510 17:35:28.093102  880413 status.go:371] multinode-729056 host status = "Running" (err=<nil>)
	I0510 17:35:28.093146  880413 host.go:66] Checking if "multinode-729056" exists ...
	I0510 17:35:28.093411  880413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-729056
	I0510 17:35:28.112311  880413 host.go:66] Checking if "multinode-729056" exists ...
	I0510 17:35:28.112650  880413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:35:28.112698  880413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-729056
	I0510 17:35:28.129103  880413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/multinode-729056/id_rsa Username:docker}
	I0510 17:35:28.213221  880413 ssh_runner.go:195] Run: systemctl --version
	I0510 17:35:28.217333  880413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:35:28.227843  880413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:35:28.277390  880413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-05-10 17:35:28.268671279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:35:28.277984  880413 kubeconfig.go:125] found "multinode-729056" server: "https://192.168.67.2:8443"
	I0510 17:35:28.278017  880413 api_server.go:166] Checking apiserver status ...
	I0510 17:35:28.278057  880413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0510 17:35:28.288927  880413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	I0510 17:35:28.297696  880413 api_server.go:182] apiserver freezer: "11:freezer:/docker/0fb99be3ede7907ec7af7f49e21a019ebe58fa83a3ebf9f0f4e070421be9a910/crio/crio-7951184cad7f5485de0b8971ea6d9c93efdb3a9c0868c05e5d0e235575f04e7d"
	I0510 17:35:28.297784  880413 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0fb99be3ede7907ec7af7f49e21a019ebe58fa83a3ebf9f0f4e070421be9a910/crio/crio-7951184cad7f5485de0b8971ea6d9c93efdb3a9c0868c05e5d0e235575f04e7d/freezer.state
	I0510 17:35:28.305617  880413 api_server.go:204] freezer state: "THAWED"
	I0510 17:35:28.305645  880413 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0510 17:35:28.309363  880413 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0510 17:35:28.309388  880413 status.go:463] multinode-729056 apiserver status = Running (err=<nil>)
	I0510 17:35:28.309403  880413 status.go:176] multinode-729056 status: &{Name:multinode-729056 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:35:28.309424  880413 status.go:174] checking status of multinode-729056-m02 ...
	I0510 17:35:28.309659  880413 cli_runner.go:164] Run: docker container inspect multinode-729056-m02 --format={{.State.Status}}
	I0510 17:35:28.326739  880413 status.go:371] multinode-729056-m02 host status = "Running" (err=<nil>)
	I0510 17:35:28.326766  880413 host.go:66] Checking if "multinode-729056-m02" exists ...
	I0510 17:35:28.327020  880413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-729056-m02
	I0510 17:35:28.343825  880413 host.go:66] Checking if "multinode-729056-m02" exists ...
	I0510 17:35:28.344102  880413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0510 17:35:28.344146  880413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-729056-m02
	I0510 17:35:28.360568  880413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20720-722920/.minikube/machines/multinode-729056-m02/id_rsa Username:docker}
	I0510 17:35:28.444649  880413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0510 17:35:28.455180  880413 status.go:176] multinode-729056-m02 status: &{Name:multinode-729056-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:35:28.455215  880413 status.go:174] checking status of multinode-729056-m03 ...
	I0510 17:35:28.455498  880413 cli_runner.go:164] Run: docker container inspect multinode-729056-m03 --format={{.State.Status}}
	I0510 17:35:28.472243  880413 status.go:371] multinode-729056-m03 host status = "Stopped" (err=<nil>)
	I0510 17:35:28.472265  880413 status.go:384] host is not running, skipping remaining checks
	I0510 17:35:28.472271  880413 status.go:176] multinode-729056-m03 status: &{Name:multinode-729056-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
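
Aside: the stderr trace above shows how status decides "apiserver: Running": locate the kube-apiserver process, confirm its freezer cgroup reports THAWED, then probe https://<node>:8443/healthz and require HTTP 200. A minimal sketch of that last probe follows; the skip-verify TLS transport is an assumption for brevity, since the real client can authenticate against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy treats any HTTP 200 from /healthz as a running apiserver.
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.67.2:8443/healthz"))
}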

TestMultiNode/serial/StartAfterStop (7.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-729056 node start m03 -v=5 --alsologtostderr: (6.818829081s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.47s)

TestMultiNode/serial/RestartKeepsNodes (70.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-729056
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-729056
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-729056: (24.700520895s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-729056 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-729056 --wait=true -v=5 --alsologtostderr: (45.928620407s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-729056
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.74s)

TestMultiNode/serial/DeleteNode (5.23s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-729056 node delete m03: (4.668639279s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.23s)

TestMultiNode/serial/StopMultiNode (23.72s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 stop
E0510 17:37:03.272280  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-729056 stop: (23.547867645s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-729056 status: exit status 7 (89.526691ms)

-- stdout --
	multinode-729056
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-729056-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr: exit status 7 (84.205397ms)

-- stdout --
	multinode-729056
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-729056-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0510 17:37:15.592300  890088 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:37:15.592438  890088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:37:15.592450  890088 out.go:358] Setting ErrFile to fd 2...
	I0510 17:37:15.592456  890088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:37:15.592648  890088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:37:15.592849  890088 out.go:352] Setting JSON to false
	I0510 17:37:15.592894  890088 mustload.go:65] Loading cluster: multinode-729056
	I0510 17:37:15.593051  890088 notify.go:220] Checking for updates...
	I0510 17:37:15.593334  890088 config.go:182] Loaded profile config "multinode-729056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:37:15.593360  890088 status.go:174] checking status of multinode-729056 ...
	I0510 17:37:15.593800  890088 cli_runner.go:164] Run: docker container inspect multinode-729056 --format={{.State.Status}}
	I0510 17:37:15.611262  890088 status.go:371] multinode-729056 host status = "Stopped" (err=<nil>)
	I0510 17:37:15.611291  890088 status.go:384] host is not running, skipping remaining checks
	I0510 17:37:15.611301  890088 status.go:176] multinode-729056 status: &{Name:multinode-729056 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0510 17:37:15.611350  890088 status.go:174] checking status of multinode-729056-m02 ...
	I0510 17:37:15.611726  890088 cli_runner.go:164] Run: docker container inspect multinode-729056-m02 --format={{.State.Status}}
	I0510 17:37:15.628757  890088 status.go:371] multinode-729056-m02 host status = "Stopped" (err=<nil>)
	I0510 17:37:15.628779  890088 status.go:384] host is not running, skipping remaining checks
	I0510 17:37:15.628786  890088 status.go:176] multinode-729056-m02 status: &{Name:multinode-729056-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.72s)

TestMultiNode/serial/RestartMultiNode (51.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-729056 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-729056 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.504006314s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-729056 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.11s)

TestMultiNode/serial/ValidateNameConflict (25.02s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-729056
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-729056-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-729056-m02 --driver=docker  --container-runtime=crio: exit status 14 (68.704937ms)

-- stdout --
	* [multinode-729056-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-729056-m02' is duplicated with machine name 'multinode-729056-m02' in profile 'multinode-729056'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-729056-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-729056-m03 --driver=docker  --container-runtime=crio: (22.799967766s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-729056
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-729056: exit status 80 (270.492871ms)

-- stdout --
	* Adding node m03 to cluster multinode-729056 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-729056-m03 already exists in multinode-729056-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-729056-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-729056-m03: (1.833656061s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.02s)
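
Aside: the two rejected commands above encode one rule: a requested profile name must collide neither with an existing profile nor with a machine generated inside one (nodes are named <profile>-m02, -m03, ...). Below is a hypothetical sketch of that check; the function and its prefix heuristic are invented here for illustration and are not minikube's validation code.

package main

import (
	"fmt"
	"strings"
)

// profileNameValid rejects names that shadow a profile or one of its machines.
func profileNameValid(name string, existing []string) error {
	for _, p := range existing {
		if name == p {
			return fmt.Errorf("profile %q already exists", name)
		}
		if strings.HasPrefix(name, p+"-m") {
			return fmt.Errorf("profile name %q is duplicated with a machine name in profile %q", name, p)
		}
	}
	return nil
}

func main() {
	fmt.Println(profileNameValid("multinode-729056-m02", []string{"multinode-729056"}))
}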

TestPreload (116.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-485520 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0510 17:39:05.060642  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-485520 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m17.255048801s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-485520 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-485520 image pull gcr.io/k8s-minikube/busybox: (1.204086989s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-485520
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-485520: (5.720306442s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-485520 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0510 17:40:06.336035  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:40:28.126051  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-485520 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (30.104455127s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-485520 image list
helpers_test.go:175: Cleaning up "test-preload-485520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-485520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-485520: (2.28900054s)
--- PASS: TestPreload (116.79s)
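The preload flow above boils down to four CLI steps: start with --preload=false, pull an image, stop the cluster, restart it, and confirm the image survived. A minimal Go sketch of that sequence, assuming only a minikube binary on PATH; the profile name is hypothetical and this re-runs the steps rather than reproducing the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and returns combined stdout/stderr.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	p := "test-preload-demo" // hypothetical profile name
	steps := [][]string{
		// start without a preloaded tarball, forcing images to be pulled
		{"start", "-p", p, "--memory=2200", "--preload=false",
			"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.24.4"},
		// cache an extra image inside the cluster
		{"-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", p},
		// restart; the pulled image must survive the stop/start cycle
		{"start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
	out, _ := run("-p", p, "image", "list")
	fmt.Print(out) // busybox should still be listed here
}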

TestScheduledStopUnix (98.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-268097 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-268097 --memory=2048 --driver=docker  --container-runtime=crio: (22.100021501s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-268097 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-268097 -n scheduled-stop-268097
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-268097 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0510 17:40:55.019325  729815 retry.go:31] will retry after 119.331µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.020543  729815 retry.go:31] will retry after 206.822µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.021675  729815 retry.go:31] will retry after 259.621µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.022819  729815 retry.go:31] will retry after 324.883µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.023955  729815 retry.go:31] will retry after 253.917µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.025107  729815 retry.go:31] will retry after 905.735µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.026230  729815 retry.go:31] will retry after 705.147µs: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.027370  729815 retry.go:31] will retry after 1.386399ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.029620  729815 retry.go:31] will retry after 3.499728ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.033867  729815 retry.go:31] will retry after 4.676875ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.039081  729815 retry.go:31] will retry after 7.931177ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.047361  729815 retry.go:31] will retry after 8.50132ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.056560  729815 retry.go:31] will retry after 13.31213ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.070940  729815 retry.go:31] will retry after 25.290721ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
I0510 17:40:55.097231  729815 retry.go:31] will retry after 41.301662ms: open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/scheduled-stop-268097/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-268097 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-268097 -n scheduled-stop-268097
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-268097
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-268097 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0510 17:42:03.272019  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-268097
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-268097: exit status 7 (67.998859ms)
-- stdout --
	scheduled-stop-268097
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-268097 -n scheduled-stop-268097
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-268097 -n scheduled-stop-268097: exit status 7 (68.749963ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-268097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-268097
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-268097: (4.612698467s)
--- PASS: TestScheduledStopUnix (98.06s)
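The retry.go lines above are the test polling for the profile's scheduled-stop pid file, sleeping for a short, roughly doubling, jittered interval between attempts. A sketch of that polling pattern; the path and attempt count are illustrative, not minikube's actual retry implementation:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile stats path repeatedly, backing off with jitter like the
// "will retry after ..." lines in the log above.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// jitter spreads retries out instead of hammering on a fixed beat
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %s: no such file or directory\n", sleep, path)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("%s never appeared after %d attempts", path, attempts)
}

func main() {
	// hypothetical pid-file location, standing in for the profile dir above
	if err := waitForFile("/tmp/scheduled-stop-demo/pid", 15); err != nil {
		fmt.Println(err)
	}
}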

TestInsufficientStorage (9.99s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-089837 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-089837 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.672588829s)
-- stdout --
	{"specversion":"1.0","id":"fc262c19-a023-47d4-91d0-c55980e712f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-089837] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f207375-e923-48c1-8a95-cc7b0ba3d1f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20720"}}
	{"specversion":"1.0","id":"558741e0-5826-45c1-9eb0-b37cfeb0c25b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3a766275-4111-49df-a37b-0abb32a162e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig"}}
	{"specversion":"1.0","id":"6de21cc0-401a-4cab-ab5f-85cf20543f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube"}}
	{"specversion":"1.0","id":"738d59a5-0bf5-45df-bba9-4fda8e10c986","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a49203f2-7d77-44d6-a17e-ddddfeecf7f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d10641e8-fbfd-43ab-ade0-41be79d077fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"45d5d29a-b5f7-40ff-a9e3-462b156b0623","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7e82b8e8-8ba9-41b5-82fa-d9ea98b21b3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"97185192-abcd-434b-969d-dcfac4996e65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5e9805af-0264-472a-8d6d-64517e14f725","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-089837\" primary control-plane node in \"insufficient-storage-089837\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"32f514df-1f6f-40d0-b33f-2b6d860fdcb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1746731792-20718 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"641ab264-26a8-4904-b964-a077db9d73cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c3415c5-7251-466d-9397-8d685297dd76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-089837 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-089837 --output=json --layout=cluster: exit status 7 (262.591473ms)
-- stdout --
	{"Name":"insufficient-storage-089837","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-089837","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0510 17:42:18.496880  912644 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-089837" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-089837 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-089837 --output=json --layout=cluster: exit status 7 (260.247169ms)
-- stdout --
	{"Name":"insufficient-storage-089837","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-089837","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0510 17:42:18.758106  912744 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-089837" does not appear in /home/jenkins/minikube-integration/20720-722920/kubeconfig
	E0510 17:42:18.768216  912744 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/insufficient-storage-089837/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-089837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-089837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-089837: (1.793944952s)
--- PASS: TestInsufficientStorage (9.99s)
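With --output=json, each stdout line above is a CloudEvents-style JSON object, and the run is considered failed once an io.k8s.sigs.minikube.error event with exitcode 26 (RSRC_DOCKER_STORAGE) arrives. A sketch that decodes such a stream from stdin, using only fields visible in the log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents lines above; unknown fields are ignored.
type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data map[string]string `json:"data"` // all data values in the log are strings
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are very long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: exitcode=%s name=%s\n",
				e.Data["exitcode"], e.Data["name"]) // 26 / RSRC_DOCKER_STORAGE above
		}
	}
}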

TestRunningBinaryUpgrade (62.07s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1663220614 start -p running-upgrade-966687 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1663220614 start -p running-upgrade-966687 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.982934495s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-966687 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-966687 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.810352586s)
helpers_test.go:175: Cleaning up "running-upgrade-966687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-966687
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-966687: (4.811765376s)
--- PASS: TestRunningBinaryUpgrade (62.07s)
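The upgrade test drives a single profile with two binaries: a released v1.26.0 build unpacked under /tmp, then the freshly built out/minikube-linux-amd64 run against the still-live cluster. A hedged sketch of that handoff; the binary paths and profile name below are placeholders, not the tempfile names from the log:

package main

import (
	"log"
	"os/exec"
)

func mustRun(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0"    // placeholder for the downloaded release
	newBin := "out/minikube-linux-amd64" // the binary under test
	profile := "running-upgrade-demo"    // hypothetical profile name

	// the old release still used --vm-driver, as the log shows
	mustRun(oldBin, "start", "-p", profile, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=crio")
	// the new binary takes over the running cluster in place
	mustRun(newBin, "start", "-p", profile, "--memory=2200",
		"--driver=docker", "--container-runtime=crio")
	mustRun(newBin, "delete", "-p", profile)
}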

TestKubernetesUpgrade (202.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.413804474s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-048105
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-048105: (2.118522554s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-048105 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-048105 status --format={{.Host}}: exit status 7 (72.215913ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m53.136819727s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-048105 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (72.85945ms)
-- stdout --
	* [kubernetes-upgrade-048105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.33.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-048105
	    minikube start -p kubernetes-upgrade-048105 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0481052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.33.0, by running:
	    
	    minikube start -p kubernetes-upgrade-048105 --kubernetes-version=v1.33.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-048105 --memory=2200 --kubernetes-version=v1.33.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.479744954s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-048105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-048105
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-048105: (4.18638926s)
--- PASS: TestKubernetesUpgrade (202.54s)
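The K8S_DOWNGRADE_UNSUPPORTED exit above is a version-order guard: the requested v1.20.0 is older than the cluster's current v1.33.0, so start bails out before touching the cluster. A sketch of such a guard built on golang.org/x/mod/semver (an assumed shape, not minikube's actual check):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkUpgrade refuses any requested version older than what already runs.
func checkUpgrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkUpgrade("v1.33.0", "v1.20.0")) // the rejected downgrade above
	fmt.Println(checkUpgrade("v1.20.0", "v1.33.0")) // <nil>: upgrades are allowed
}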

TestMissingContainerUpgrade (130.03s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.332803757 start -p missing-upgrade-024160 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.332803757 start -p missing-upgrade-024160 --memory=2200 --driver=docker  --container-runtime=crio: (1m4.406409396s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-024160
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-024160: (12.130623551s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-024160
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-024160 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-024160 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.004355808s)
helpers_test.go:175: Cleaning up "missing-upgrade-024160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-024160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-024160: (2.043410307s)
--- PASS: TestMissingContainerUpgrade (130.03s)
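Here the test removes the cluster's container behind minikube's back (docker stop, then docker rm) and verifies that a plain start notices and recreates it. A sketch of the same abuse, assuming the container shares the profile's name as it does in the log:

package main

import (
	"log"
	"os/exec"
)

func mustRun(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	profile := "missing-upgrade-demo" // hypothetical profile / container name
	mustRun("docker", "stop", profile)
	mustRun("docker", "rm", profile)
	// start must detect the missing container and rebuild it from scratch
	mustRun("minikube", "start", "-p", profile,
		"--driver=docker", "--container-runtime=crio")
}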

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-640513 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-640513 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (73.021139ms)
-- stdout --
	* [NoKubernetes-640513] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
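This subtest never starts a cluster; it only checks that --no-kubernetes and --kubernetes-version are rejected together, which is why it completes in 0.07s. A minimal sketch of that mutual-exclusion guard (assumed shape, not minikube's flag-handling code):

package main

import (
	"errors"
	"flag"
	"fmt"
)

func validate(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	ver := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()
	if err := validate(*noK8s, *ver); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // exit status 14 in the log
	}
}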

TestStoppedBinaryUpgrade/Setup (0.4s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

TestNoKubernetes/serial/StartWithK8s (36.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-640513 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-640513 --driver=docker  --container-runtime=crio: (35.85000743s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-640513 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.16s)

TestStoppedBinaryUpgrade/Upgrade (92.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2330175750 start -p stopped-upgrade-732817 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2330175750 start -p stopped-upgrade-732817 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.218428039s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2330175750 -p stopped-upgrade-732817 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2330175750 -p stopped-upgrade-732817 stop: (2.434822539s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-732817 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-732817 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.045916757s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.70s)

TestNoKubernetes/serial/StartWithStopK8s (12.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-640513 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-640513 --no-kubernetes --driver=docker  --container-runtime=crio: (10.127749305s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-640513 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-640513 status -o json: exit status 2 (280.071948ms)
-- stdout --
	{"Name":"NoKubernetes-640513","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-640513
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-640513: (1.900523555s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.31s)
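The exit status 2 above is deliberate: status reports a running host whose Kubernetes components are stopped, and encodes that mixed state in the exit code while still printing well-formed JSON. A sketch decoding exactly the fields shown in the stdout block:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus matches the JSON emitted by `minikube status -o json` above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-640513","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// host up, kubelet/apiserver down: exactly what --no-kubernetes promises
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer)
}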

TestNoKubernetes/serial/Start (7.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-640513 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-640513 --no-kubernetes --driver=docker  --container-runtime=crio: (7.615435226s)
--- PASS: TestNoKubernetes/serial/Start (7.62s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-640513 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-640513 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.610218ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
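The assertion above rides entirely on exit codes: systemctl is-active --quiet prints nothing and exits 0 only for an active unit (the status 3 seen over ssh is the usual code for an inactive one), so a non-zero exit is the passing outcome here. A sketch of the same check run locally:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning reports whether the kubelet systemd unit is active;
// with --quiet the exit code alone carries the answer.
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	if kubeletRunning() {
		fmt.Println("kubelet is active; --no-kubernetes did not take effect")
	} else {
		fmt.Println("kubelet inactive, as the test expects")
	}
}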

TestNoKubernetes/serial/ProfileList (6.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (5.058015244s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.011950578s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.07s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-640513
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-640513: (1.223108439s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-640513 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-640513 --driver=docker  --container-runtime=crio: (6.493903953s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-640513 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-640513 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.890932ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-732817
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-732817: (1.28693983s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

TestNetworkPlugins/group/false (3.33s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-278190 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-278190 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (170.662452ms)
-- stdout --
	* [false-278190] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20720
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0510 17:44:02.509796  940079 out.go:345] Setting OutFile to fd 1 ...
	I0510 17:44:02.509915  940079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:44:02.509920  940079 out.go:358] Setting ErrFile to fd 2...
	I0510 17:44:02.509927  940079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0510 17:44:02.510194  940079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20720-722920/.minikube/bin
	I0510 17:44:02.510946  940079 out.go:352] Setting JSON to false
	I0510 17:44:02.512453  940079 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12389,"bootTime":1746886653,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0510 17:44:02.512558  940079 start.go:140] virtualization: kvm guest
	I0510 17:44:02.515067  940079 out.go:177] * [false-278190] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0510 17:44:02.516667  940079 out.go:177]   - MINIKUBE_LOCATION=20720
	I0510 17:44:02.516688  940079 notify.go:220] Checking for updates...
	I0510 17:44:02.519396  940079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0510 17:44:02.520796  940079 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20720-722920/kubeconfig
	I0510 17:44:02.522229  940079 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20720-722920/.minikube
	I0510 17:44:02.523568  940079 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0510 17:44:02.525095  940079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0510 17:44:02.527219  940079 config.go:182] Loaded profile config "force-systemd-env-718671": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
	I0510 17:44:02.527399  940079 config.go:182] Loaded profile config "kubernetes-upgrade-048105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0510 17:44:02.527548  940079 config.go:182] Loaded profile config "missing-upgrade-024160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0510 17:44:02.527662  940079 driver.go:404] Setting default libvirt URI to qemu:///system
	I0510 17:44:02.552120  940079 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0510 17:44:02.552230  940079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0510 17:44:02.613392  940079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:true NGoroutines:85 SystemTime:2025-05-10 17:44:02.602651392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647968256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0510 17:44:02.613548  940079 docker.go:318] overlay module found
	I0510 17:44:02.615307  940079 out.go:177] * Using the docker driver based on user configuration
	I0510 17:44:02.616669  940079 start.go:304] selected driver: docker
	I0510 17:44:02.616685  940079 start.go:908] validating driver "docker" against <nil>
	I0510 17:44:02.616701  940079 start.go:919] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0510 17:44:02.619376  940079 out.go:201] 
	W0510 17:44:02.620870  940079 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0510 17:44:02.622273  940079 out.go:201] 
** /stderr **
E0510 17:44:05.059481  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
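The exit status 14 above is another pure validation failure: with --container-runtime=crio, --cni=false is rejected before any driver work because crio brings no fallback pod networking of its own. A minimal sketch of that guard (assumed shape, not minikube's actual validation code):

package main

import "fmt"

// validateCNI mirrors the MK_USAGE rejection in the stderr block above.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return fmt.Errorf("X Exiting due to MK_USAGE: The %q container runtime requires CNI",
			containerRuntime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // the failing combination above
	fmt.Println(validateCNI("crio", "auto"))  // <nil>
}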
net_test.go:88: 
----------------------- debugLogs start: false-278190 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-278190" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 10 May 2025 17:43:24 UTC
provider: minikube.sigs.k8s.io
version: v1.26.0
name: cluster_info
server: https://192.168.103.2:8443
name: missing-upgrade-024160
contexts:
- context:
cluster: missing-upgrade-024160
extensions:
- extension:
last-update: Sat, 10 May 2025 17:43:24 UTC
provider: minikube.sigs.k8s.io
version: v1.26.0
name: context_info
namespace: default
user: missing-upgrade-024160
name: missing-upgrade-024160
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-024160
user:
client-certificate: /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/missing-upgrade-024160/client.crt
client-key: /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/missing-upgrade-024160/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-278190

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: cri-docker daemon config:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: cri-dockerd version:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: containerd daemon status:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: containerd daemon config:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: /etc/containerd/config.toml:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: containerd config dump:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: crio daemon status:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: crio daemon config:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: /etc/crio:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

>>> host: crio config:
* Profile "false-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-278190"

----------------------- debugLogs end: false-278190 [took: 3.002129772s] --------------------------------
helpers_test.go:175: Cleaning up "false-278190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-278190
--- PASS: TestNetworkPlugins/group/false (3.33s)

TestPause/serial/Start (39.54s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-794419 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-794419 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (39.544516784s)
--- PASS: TestPause/serial/Start (39.54s)

TestNetworkPlugins/group/auto/Start (41.49s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.486770398s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.49s)

TestPause/serial/SecondStartNoReconfiguration (30.24s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-794419 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-794419 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.220205628s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-278190 "pgrep -a kubelet"
I0510 17:46:21.612875  729815 config.go:182] Loaded profile config "auto-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-278190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9c99x" [022b1223-901d-4391-b48f-006123b36484] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9c99x" [022b1223-901d-4391-b48f-006123b36484] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00390818s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)
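The NetCatPod steps deploy a small netcat pod from testdata/netcat-deployment.yaml and poll until the pod labelled app=netcat reports Ready. A rough hand-run equivalent of that wait, using the same 15m budget the test allows:

# re-create the deployment, then block until its pod is Ready
kubectl --context auto-278190 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-278190 wait --for=condition=ready pod -l app=netcat --timeout=15m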
TestPause/serial/Pause (0.68s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-794419 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

TestPause/serial/VerifyStatus (0.28s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-794419 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-794419 --output=json --layout=cluster: exit status 2 (283.407075ms)

-- stdout --
	{"Name":"pause-794419","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-794419","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
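The non-zero exit above is expected: for a paused cluster, minikube status exits with status 2 while the JSON payload reports StatusCode 418 ("Paused") for the apiserver and 405 ("Stopped") for the kubelet. A small sketch for pulling those fields out of the same command (assumes jq is available on the host):

# prints "Paused"; the status command itself still exits 2 (check PIPESTATUS when scripting)
out/minikube-linux-amd64 status -p pause-794419 --output=json --layout=cluster | jq -r '.StatusName'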
TestPause/serial/Unpause (0.62s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-794419 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

TestPause/serial/PauseAgain (0.74s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-794419 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

TestPause/serial/DeletePaused (2.61s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-794419 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-794419 --alsologtostderr -v=5: (2.613119461s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)
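The DNS step resolves the kubernetes.default service name from inside the netcat pod, exercising the cluster DNS path end to end. The same probe run by hand, with the fully qualified form added for illustration (cluster.local is the default cluster domain):

kubectl --context auto-278190 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-278190 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local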
TestPause/serial/VerifyDeletedResources (0.75s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-794419
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-794419: exit status 1 (16.19945ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-794419: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.75s)
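Here the non-zero exit is the success condition: docker volume inspect failing with "no such volume" shows the delete removed the profile's volume. A minimal shell sketch of the same assertion:

# inspect exits 1 once the volume is gone
if docker volume inspect pause-794419 >/dev/null 2>&1; then
  echo "volume still present"
else
  echo "volume deleted"
fi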
TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
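The Localhost and HairPin probes differ only in the target: localhost 8080 confirms the pod can reach its own port directly, while netcat 8080 routes through the service VIP and back to the same pod (hairpin traffic). In the nc invocations above, -z connects without sending data, -w 5 caps the connect timeout, and -i 5 spaces the probes. The hairpin check run by hand:

kubectl --context auto-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo "hairpin OK"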
TestNetworkPlugins/group/calico/Start (55.84s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.839311707s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.84s)

TestNetworkPlugins/group/custom-flannel/Start (54.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.319707238s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.32s)

TestNetworkPlugins/group/kindnet/Start (42.4s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0510 17:47:03.265159  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.399973934s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.40s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ngq7j" [ebc4ae13-b635-4231-82bd-b86b732a283c] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-ngq7j" [ebc4ae13-b635-4231-82bd-b86b732a283c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004272321s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
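ControllerPod gates the rest of the calico group on the CNI daemonset actually coming up: it polls pods labelled k8s-app=calico-node in kube-system until they are Ready. A rough kubectl equivalent of that wait, with the test's 10m ceiling:

kubectl --context calico-278190 -n kube-system wait --for=condition=ready pod -l k8s-app=calico-node --timeout=10m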
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-278190 "pgrep -a kubelet"
I0510 17:47:33.927886  729815 config.go:182] Loaded profile config "calico-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-278190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-79qkx" [074fe108-6018-4917-ac66-147362d37af2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-79qkx" [074fe108-6018-4917-ac66-147362d37af2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003502015s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tbt2m" [5b505a1c-9ed3-4226-ac2e-90e409d018f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0040042s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-278190 "pgrep -a kubelet"
I0510 17:47:43.941637  729815 config.go:182] Loaded profile config "kindnet-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-278190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sbwkt" [625eb9a1-da97-4f18-9bc7-953195c363e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sbwkt" [625eb9a1-da97-4f18-9bc7-953195c363e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003986077s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-278190 "pgrep -a kubelet"
I0510 17:47:44.422257  729815 config.go:182] Loaded profile config "custom-flannel-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-278190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-427j5" [eef596f6-3d4c-47c6-ab44-25d8379a8a68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-427j5" [eef596f6-3d4c-47c6-ab44-25d8379a8a68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004409546s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (54.13s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.132008539s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.13s)

TestNetworkPlugins/group/enable-default-cni/Start (71.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.38196725s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.38s)

TestNetworkPlugins/group/bridge/Start (69.23s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-278190 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.226370345s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (132.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-697935 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-697935 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m12.123511847s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.12s)
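This start pins the cluster to Kubernetes v1.20.0 to exercise the oldest version this job covers; it is markedly slower (2m12s) than the v1.33.0 starts above, likely because the matching images must be fetched rather than preloaded. A quick hand check that the pin took effect:

# the VERSION column should read v1.20.0
kubectl --context old-k8s-version-697935 get nodes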
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gwf5j" [d7fcbc60-08e8-49df-8597-27efe580339c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003919419s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-278190 "pgrep -a kubelet"
I0510 17:49:03.388970  729815 config.go:182] Loaded profile config "flannel-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-278190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6bwmh" [6b44b2f6-35a7-4ed3-902a-8bdc6639e9e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0510 17:49:05.060196  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-6bwmh" [6b44b2f6-35a7-4ed3-902a-8bdc6639e9e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004016349s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-278190 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-278190 "pgrep -a kubelet"
I0510 17:49:24.633874  729815 config.go:182] Loaded profile config "bridge-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-278190 replace --force -f testdata/netcat-deployment.yaml
I0510 17:49:24.761377  729815 config.go:182] Loaded profile config "enable-default-cni-278190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.33.0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-csbnz" [9f2a18ea-4022-4615-be30-9154aeeae26d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-csbnz" [9f2a18ea-4022-4615-be30-9154aeeae26d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003961883s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-278190 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jklqj" [0f08c875-7d50-48ff-b994-05278101ccb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jklqj" [0f08c875-7d50-48ff-b994-05278101ccb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004183314s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

TestStartStop/group/no-preload/serial/FirstStart (63.35s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-058078 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-058078 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (1m3.346034199s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.35s)
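--preload=false skips minikube's preloaded image tarball, so every component image is pulled individually; that is the point of the no-preload group and accounts for the longer first start. To see what ended up in the profile's runtime (stock minikube subcommand; output format varies by version):

out/minikube-linux-amd64 image ls -p no-preload-058078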
TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-278190 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0510 17:53:05.162218  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:08.647872  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:18.657782  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:25.644156  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:49.609391  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.104788  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.111233  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.122645  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.144117  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.185581  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.267200  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.428754  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:57.750884  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:58.393120  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:59.619405  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:53:59.674946  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:02.236842  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:05.059906  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:05.668117  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:06.606355  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:07.358431  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:17.600571  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:24.844910  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:24.851362  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:24.862731  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:24.884184  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:24.925630  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.002259  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.007751  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.008931  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.020346  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.041730  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.083382  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.164931  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.169287  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.326345  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.490959  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:25.647642  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:26.132594  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:26.289937  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:27.414159  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:27.571727  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:29.976453  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:30.134043  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:35.098625  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:35.256232  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:38.082500  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:45.340806  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:54:45.498328  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:05.822461  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:05.979963  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:11.531612  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:19.044793  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:21.540764  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:28.528258  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.299877  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.306272  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.317626  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.339065  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.380563  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.462082  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.623580  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:36.945287  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:37.587281  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:38.869266  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:41.431584  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:46.553271  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:46.784609  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:46.942129  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:55:56.795421  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:56:17.276942  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:56:21.807603  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:56:40.966163  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:56:46.338396  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:56:49.510273  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:56:58.238777  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:03.265626  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:08.127590  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:08.706724  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:08.864272  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:27.671011  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:37.679346  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:44.667871  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:57:55.373511  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:05.382314  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:12.369975  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:20.160691  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:58:57.104724  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:05.059387  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/functional-914764/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:24.808124  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:24.845630  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:25.002298  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:52.548724  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/bridge-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:59:52.705664  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/enable-default-cni-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 18:00:36.299456  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/no-preload-058078/client.crt: no such file or directory" logger="UnhandledError"
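The repeated cert_rotation messages above are emitted by the test binary's client-go certificate-reload watcher: it still holds kubeconfig entries whose client.crt files lived under .minikube/profiles/ for network-plugin profiles (auto-278190, calico-278190, kindnet-278190, …) that earlier tests already deleted, so every resync logs "no such file or directory". They read as log noise, not as failures in the passing tests below. A minimal cleanup sketch, assuming the stale kubeconfig entries share the deleted profile's name (auto-278190 is used here purely as an example):

# Drop the kubeconfig entries left behind by one deleted profile;
# repeat for each profile named in the errors above.
kubectl config delete-context auto-278190
kubectl config delete-cluster auto-278190
kubectl config unset users.auto-278190    # removes the user and its client.crt path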

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
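Both probes run inside the netcat deployment of the enable-default-cni profile: Localhost checks that the pod can reach port 8080 over its own loopback, and HairPin checks that it can reach itself back through its Service name, i.e. that hairpin traffic is allowed. Replaying them by hand (BusyBox nc flags: -z connect-only, -w 5 five-second timeout, -i 5 interval between probes):

kubectl --context enable-default-cni-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context enable-default-cni-278190 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"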

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (47.2s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-256321 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-256321 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (47.20012086s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.20s)
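--embed-certs inlines the client certificate data into the generated kubeconfig instead of referencing files under .minikube/profiles/, which incidentally is the standard remedy for dangling-path noise like the cert_rotation errors at the top of this section. A quick way to confirm the certs really are embedded (a sketch; --raw is needed because kubectl normally redacts certificate data):

kubectl config view --raw --minify --context embed-certs-256321 -o jsonpath='{.users[0].user.client-certificate-data}' | head -c 16   # non-empty when embedded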

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-676255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-676255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (45.368395081s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.37s)
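This profile pins the API server to 8444 via --apiserver-port, so everything that follows must work against a non-default port. A sanity check of the generated kubeconfig (a sketch using standard kubectl jsonpath):

# The cluster entry's server URL should end in :8444.
kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-676255")].cluster.server}'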

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-697935 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [994897ed-c87f-4f7e-92b5-a7b13364ce05] Pending
helpers_test.go:344: "busybox" [994897ed-c87f-4f7e-92b5-a7b13364ce05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [994897ed-c87f-4f7e-92b5-a7b13364ce05] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002948846s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-697935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
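DeployApp follows the same pattern for every profile: create the busybox pod from testdata/busybox.yaml, wait until it is healthy, then exec into it and read the open-file limit to prove that exec works against the cluster. A hand-run equivalent, assuming (as the wait output above indicates) the manifest labels the pod integration-test=busybox:

kubectl --context old-k8s-version-697935 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-697935 wait --for=condition=Ready pod/busybox --timeout=8m
kubectl --context old-k8s-version-697935 exec busybox -- /bin/sh -c "ulimit -n"    # prints the container's fd limit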

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.22s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-058078 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1034a2ed-32e1-4727-b07d-d2eedc310eec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1034a2ed-32e1-4727-b07d-d2eedc310eec] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003445751s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-058078 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-697935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-697935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)
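The --images/--registries overrides rewrite the metrics-server addon to use registry.k8s.io/echoserver:1.4 from the unreachable registry fake.domain; the point appears to be asserting that the addon machinery renders the Deployment with substituted images, not that metrics actually flow. The substitution can be checked directly (sketch):

# Show which image the rendered metrics-server Deployment references.
kubectl --context old-k8s-version-697935 -n kube-system get deploy/metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'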

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-697935 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-697935 --alsologtostderr -v=3: (12.012865521s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [32b7d831-89a8-40d4-8dca-eef66272081b] Pending
helpers_test.go:344: "busybox" [32b7d831-89a8-40d4-8dca-eef66272081b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [32b7d831-89a8-40d4-8dca-eef66272081b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00377375s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-058078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-058078 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/DeployApp (7.27s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-256321 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d981b7a4-899f-43b3-b6c5-c3f06b84555c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d981b7a4-899f-43b3-b6c5-c3f06b84555c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003678781s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-256321 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.27s)

TestStartStop/group/no-preload/serial/Stop (11.89s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-058078 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-058078 --alsologtostderr -v=3: (11.891724419s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.89s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-256321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-256321 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-676255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-676255 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (13.19s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-256321 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-256321 --alsologtostderr -v=3: (13.187365659s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.19s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697935 -n old-k8s-version-697935
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697935 -n old-k8s-version-697935: exit status 7 (83.835976ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-697935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
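minikube status deliberately exits nonzero for defined non-running states; exit status 7 appears to encode "host, kubelet and apiserver all stopped", which is why the harness notes "may be ok" and then enables the dashboard addon against the stopped profile. Reproducing the check by hand:

out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-697935 || echo "exit $? while stopped is expected"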

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (111.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-697935 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-697935 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m50.877401105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697935 -n old-k8s-version-697935
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (111.18s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-676255 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-676255 --alsologtostderr -v=3: (12.595990189s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.60s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-058078 -n no-preload-058078
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-058078 -n no-preload-058078: exit status 7 (67.120683ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-058078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (52.73s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-058078 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-058078 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (52.427476133s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-058078 -n no-preload-058078
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.73s)
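With --preload=false this profile skips minikube's preloaded image tarball, so the container runtime pulls every Kubernetes image itself; the restart still completes in under a minute because the images pulled during FirstStart are already in the local store. What was pulled can be listed the same way the VerifyKubernetesImages step does below:

out/minikube-linux-amd64 -p no-preload-058078 image list --format=json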

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255: exit status 7 (95.472537ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-676255 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256321 -n embed-certs-256321
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256321 -n embed-certs-256321: exit status 7 (93.630837ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-256321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-676255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-676255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (53.082703406s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.39s)

TestStartStop/group/embed-certs/serial/SecondStart (50.01s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-256321 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 17:51:21.807434  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:21.813820  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:21.826259  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:21.847673  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:21.889234  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:21.970727  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:22.132541  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:22.454528  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:23.095891  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:24.377525  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:26.938884  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:32.060260  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:51:42.302480  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-256321 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (49.709488286s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-256321 -n embed-certs-256321
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wdk6l" [fa22bc68-8eb1-4115-b4fe-9a2775e26e10] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004155536s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wdk6l" [fa22bc68-8eb1-4115-b4fe-9a2775e26e10] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002972244s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-058078 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-058078 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.66s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-058078 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-058078 -n no-preload-058078
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-058078 -n no-preload-058078: exit status 2 (287.816134ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-058078 -n no-preload-058078
E0510 17:52:02.784198  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-058078 -n no-preload-058078: exit status 2 (292.064011ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-058078 --alsologtostderr -v=1
E0510 17:52:03.265242  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/addons-088134/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-058078 -n no-preload-058078
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-058078 -n no-preload-058078
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.66s)
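The Pause sequence asserts state transitions rather than uptime: after pause, status reports the apiserver as Paused and the kubelet as Stopped, each via exit status 2 that the harness again treats as "may be ok"; unpause then has to bring both back. The same cycle by hand:

out/minikube-linux-amd64 pause -p no-preload-058078
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-058078    # "Paused", nonzero exit
out/minikube-linux-amd64 unpause -p no-preload-058078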

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (30.79s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-173135 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 17:52:27.671595  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:27.678036  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:27.689428  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:27.710838  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:27.752265  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:27.833714  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:27.995259  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:28.316789  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:28.958536  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:30.240638  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:32.802186  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-173135 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (30.787274787s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.79s)
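newest-cni starts with --network-plugin=cni and a custom pod CIDR handed to kubeadm, and only waits for apiserver, system_pods and default_sa; the harness treats this mode as needing extra network setup, so the DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop steps below are recorded as immediate passes with the warning "cni mode requires additional setup before pods can schedule". The relevant flags in isolation, reusing the values from the run above:

out/minikube-linux-amd64 start -p newest-cni-173135 --memory=2200 \
  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.33.0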

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-173135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0510 17:52:37.678658  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:37.687249  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:37.698650  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:37.720080  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:37.761405  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:37.842878  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:37.924356  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/calico-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:38.004881  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:38.326765  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-173135 --alsologtostderr -v=3
E0510 17:52:38.968124  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-173135 --alsologtostderr -v=3: (1.198894217s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173135 -n newest-cni-173135
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173135 -n newest-cni-173135: exit status 7 (71.273184ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-173135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (13.36s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-173135 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0
E0510 17:52:40.249981  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:42.811918  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/kindnet-278190/client.crt: no such file or directory" logger="UnhandledError"
E0510 17:52:43.745988  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/auto-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-173135 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.33.0: (13.048472866s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173135 -n newest-cni-173135
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.36s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-173135 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-173135 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173135 -n newest-cni-173135
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173135 -n newest-cni-173135: exit status 2 (295.820365ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173135 -n newest-cni-173135
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173135 -n newest-cni-173135: exit status 2 (294.273806ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-173135 --alsologtostderr -v=1
E0510 17:52:54.920701  729815 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/custom-flannel-278190/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173135 -n newest-cni-173135
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173135 -n newest-cni-173135
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-256321 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.75s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-256321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321: exit status 2 (303.285895ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-256321 -n embed-certs-256321
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-256321 -n embed-certs-256321: exit status 2 (313.060496ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-256321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-256321 -n embed-certs-256321
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-256321 -n embed-certs-256321
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.75s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-676255 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-676255 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255: exit status 2 (323.785632ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255: exit status 2 (290.358772ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-676255 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-676255 -n default-k8s-diff-port-676255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-697935 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-697935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935: exit status 2 (276.100506ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697935 -n old-k8s-version-697935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697935 -n old-k8s-version-697935: exit status 2 (277.501934ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-697935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697935 -n old-k8s-version-697935
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697935 -n old-k8s-version-697935
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.41s)
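The three Pause subtests above all run the same verify cycle. A minimal shell sketch of that cycle, built from the commands shown in the log (profile name from the log; the `|| true` guards are an illustrative addition, since `status` exits 2 for a paused cluster):

  # Pause, confirm the apiserver reports Paused and the kubelet Stopped
  # (exit status 2 from `status` is expected at this point), then unpause.
  P=old-k8s-version-697935
  out/minikube-linux-amd64 pause -p "$P" --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$P" || true   # prints Paused, exits 2
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$P" || true     # prints Stopped, exits 2
  out/minikube-linux-amd64 unpause -p "$P" --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$P"           # expected to report Running again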

Test skip (27/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.33.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.33.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.33.0/cached-images (0.00s)

TestDownloadOnly/v1.33.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.33.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.33.0/binaries (0.00s)

TestDownloadOnly/v1.33.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.33.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.33.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-088134 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.25s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-278190 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-278190

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-278190

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/hosts:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/resolv.conf:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-278190

>>> host: crictl pods:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: crictl containers:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> k8s: describe netcat deployment:
error: context "kubenet-278190" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-278190" does not exist

>>> k8s: netcat logs:
error: context "kubenet-278190" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-278190" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-278190" does not exist

>>> k8s: coredns logs:
error: context "kubenet-278190" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-278190" does not exist

>>> k8s: api server logs:
error: context "kubenet-278190" does not exist

>>> host: /etc/cni:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: ip a s:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: ip r s:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: iptables-save:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: iptables table nat:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-278190" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-278190" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-278190" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: kubelet daemon config:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> k8s: kubelet logs:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 May 2025 17:43:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-024160
contexts:
- context:
    cluster: missing-upgrade-024160
    extensions:
    - extension:
        last-update: Sat, 10 May 2025 17:43:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-024160
  name: missing-upgrade-024160
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-024160
  user:
    client-certificate: /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/missing-upgrade-024160/client.crt
    client-key: /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/missing-upgrade-024160/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-278190

>>> host: docker daemon status:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: docker daemon config:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: docker system info:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: cri-docker daemon status:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: cri-docker daemon config:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: cri-dockerd version:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: containerd daemon status:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: containerd daemon config:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: containerd config dump:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: crio daemon status:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: crio daemon config:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: /etc/crio:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

>>> host: crio config:
* Profile "kubenet-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-278190"

----------------------- debugLogs end: kubenet-278190 [took: 5.058453867s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-278190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-278190
--- SKIP: TestNetworkPlugins/group/kubenet (5.25s)
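The skip reason above notes that the CRI-O runtime requires a CNI, which is why the kubenet variant never starts a cluster. For comparison, an explicit CNI selection on CRI-O would look like the following (both flags are documented minikube options; the bridge value is an illustrative choice, not what this job runs):

  # kubenet is unsupported with CRI-O, so a cluster on this runtime must
  # select a CNI explicitly (the bridge value here is illustrative).
  out/minikube-linux-amd64 start -p kubenet-278190 --container-runtime=crio --cni=bridge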

TestNetworkPlugins/group/cilium (4.8s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-278190 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-278190

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-278190

>>> host: /etc/nsswitch.conf:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /etc/hosts:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /etc/resolv.conf:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-278190

>>> host: crictl pods:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: crictl containers:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> k8s: describe netcat deployment:
error: context "cilium-278190" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-278190" does not exist

>>> k8s: netcat logs:
error: context "cilium-278190" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-278190" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-278190" does not exist

>>> k8s: coredns logs:
error: context "cilium-278190" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-278190" does not exist

>>> k8s: api server logs:
error: context "cilium-278190" does not exist

>>> host: /etc/cni:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: ip a s:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: ip r s:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: iptables-save:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: iptables table nat:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-278190

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-278190

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-278190" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-278190" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-278190

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-278190

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-278190" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-278190" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-278190" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-278190" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-278190" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: kubelet daemon config:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> k8s: kubelet logs:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20720-722920/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 May 2025 17:43:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-024160
contexts:
- context:
    cluster: missing-upgrade-024160
    extensions:
    - extension:
        last-update: Sat, 10 May 2025 17:43:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-024160
  name: missing-upgrade-024160
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-024160
  user:
    client-certificate: /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/missing-upgrade-024160/client.crt
    client-key: /home/jenkins/minikube-integration/20720-722920/.minikube/profiles/missing-upgrade-024160/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-278190

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: cri-dockerd version:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: containerd daemon status:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: containerd daemon config:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: containerd config dump:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: crio daemon status:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: crio daemon config:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: /etc/crio:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"

>>> host: crio config:
* Profile "cilium-278190" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-278190"
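Every ">>> host: ..." heading above is one probe in a fixed sequence, and once the profile has been deleted each probe prints the same "Profile not found" hint. A rough sketch of such a collector loop, under the assumption that probes shell out via "minikube ssh"; the two-entry probe table is illustrative, not minikube's actual list:

package main

import (
	"fmt"
	"os/exec"
)

// probe pairs a log heading with the host command whose output it labels.
type probe struct {
	header string
	args   []string
}

func main() {
	profile := "cilium-278190"
	probes := []probe{
		{">>> host: crio daemon status:", []string{"ssh", "-p", profile, "sudo", "systemctl", "status", "crio"}},
		{">>> host: crio config:", []string{"ssh", "-p", profile, "sudo", "crio", "config"}},
	}
	for _, p := range probes {
		fmt.Println(p.header)
		out, err := exec.Command("minikube", p.args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// With the profile gone, every probe fails here the same way.
			fmt.Println("probe failed:", err)
		}
		fmt.Println()
	}
}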

----------------------- debugLogs end: cilium-278190 [took: 4.594844814s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-278190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-278190
--- SKIP: TestNetworkPlugins/group/cilium (4.80s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-161072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-161072
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
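The skip above is driven purely by the VM driver under test. A minimal sketch of such a guard, assuming a hypothetical DriverName helper (the real check sits at start_stop_delete_test.go:101):

package integration

import "testing"

// DriverName stands in for however the harness reports the VM driver
// under test (hypothetical; not minikube's actual helper).
func DriverName() string { return "docker" }

func TestDisableDriverMounts(t *testing.T) {
	if DriverName() != "virtualbox" {
		// Mirrors the skip recorded above for non-virtualbox drivers.
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// On virtualbox, the test would start a cluster with
	// --disable-driver-mounts and assert that host mounts are absent.
}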
