Test Report: Docker_Linux_containerd 22054

83cf6fd59e5d8f3d63346b28bfbd6fd8e1f567be:2025-12-07:42677

Failed tests (23/418)

Order  Failed test  Duration (s)
46 TestAddons/parallel/Ingress 491.47
53 TestAddons/parallel/LocalPath 344.79
63 TestDockerEnvContainerd 35.51
99 TestFunctional/parallel/DashboardCmd 302.08
106 TestFunctional/parallel/ServiceCmdConnect 602.74
108 TestFunctional/parallel/PersistentVolumeClaim 367.87
112 TestFunctional/parallel/MySQL 602.54
123 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 240.64
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 111.84
160 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
161 TestFunctional/parallel/ServiceCmd/Format 0.52
162 TestFunctional/parallel/ServiceCmd/URL 0.52
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 3.08
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 602.74
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 367.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 602.68
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 600.62
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 240.63
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 97.9
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.53
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.52
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.52
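
The detailed log below expands the first failure, TestAddons/parallel/Ingress: the pod events show every pull of docker.io/nginx:alpine being rejected with 429 Too Many Requests (Docker Hub's unauthenticated pull rate limit), so the pod stayed in ImagePullBackOff until the 8m0s wait expired. As a hedged sketch (not part of this report), one plausible way to re-run just that test against the out/minikube-linux-amd64 binary it references, assuming the minikube repository's test/integration package layout:

    # Re-run only the failing test; -run takes a regular expression over test names.
    go test -run "TestAddons/parallel/Ingress" -timeout 60m -v ./test/integration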
TestAddons/parallel/Ingress (491.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-007439 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-007439 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-007439 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3c9264fd-2643-4039-b4ae-33eb195a30d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-007439 -n addons-007439
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-12-07 22:42:32.12985584 +0000 UTC m=+692.282521913
addons_test.go:252: (dbg) Run:  kubectl --context addons-007439 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-007439 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-007439/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:34:31 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.33
IPs:
IP:  10.244.0.33
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qg2pq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qg2pq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age    From               Message
----     ------     ----   ----               -------
Normal   Scheduled  8m1s   default-scheduler  Successfully assigned default/nginx to addons-007439
Warning  Failed     7m42s  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  4m58s (x5 over 8m)     kubelet  Pulling image "docker.io/nginx:alpine"
Warning  Failed   4m57s (x4 over 7m59s)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   4m57s (x5 over 7m59s)   kubelet  Error: ErrImagePull
Normal   BackOff  2m54s (x21 over 7m58s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed   2m54s (x21 over 7m58s)  kubelet  Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-007439 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-007439 logs nginx -n default: exit status 1 (68.353949ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-007439 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
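
The events above point at registry throttling rather than the ingress addon itself: the kubelet's pulls of docker.io/nginx:alpine were answered with 429 Too Many Requests by registry-1.docker.io until the wait timed out. A minimal sketch for confirming the state and side-stepping the registry on this profile (the addons-007439 context is taken from the log; the commands are standard kubectl, docker and minikube subcommands, not part of the test suite):

    # Confirm the container is stuck on the image pull (expects ImagePullBackOff or ErrImagePull).
    kubectl --context addons-007439 get pod nginx -n default \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'

    # Pull once on the host (authenticated if possible) and side-load the image into the
    # node, so the kubelet no longer needs to contact registry-1.docker.io.
    docker pull docker.io/nginx:alpine
    minikube -p addons-007439 image load docker.io/nginx:alpine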
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-007439
helpers_test.go:243: (dbg) docker inspect addons-007439:

-- stdout --
	[
	    {
	        "Id": "95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7",
	        "Created": "2025-12-07T22:31:36.240822065Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378269,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:31:36.269867677Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/hosts",
	        "LogPath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7-json.log",
	        "Name": "/addons-007439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-007439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-007439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7",
	                "LowerDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-007439",
	                "Source": "/var/lib/docker/volumes/addons-007439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-007439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-007439",
	                "name.minikube.sigs.k8s.io": "addons-007439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f62c5320cdbde03cb54a53c02f096730436c8a76e0b78c5c13fc9cb071c192ec",
	            "SandboxKey": "/var/run/docker/netns/f62c5320cdbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-007439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04fdabd91e38450778e4670e1c706e19a00f2f24341a0a6ec9a0300899913688",
	                    "EndpointID": "dd19f9f553ca01c1448abb8f35ca5d5acbe2ce55f612d24faadd69f4b35a5319",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6a:9e:1f:f0:fc:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-007439",
	                        "95784a3a52e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
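
The inspect dump above is captured whole by the post-mortem helper; when reading it by hand, the same data can be narrowed with docker inspect's Go-template --format flag. A small sketch (field paths follow the JSON above, profile name from this run):

    # Host port mappings only (22, 2376, 5000, 8443 and 32443 above).
    docker inspect --format '{{json .NetworkSettings.Ports}}' addons-007439

    # Container IP on the minikube network; hyphenated map keys need the index function.
    docker inspect --format '{{(index .NetworkSettings.Networks "addons-007439").IPAddress}}' addons-007439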
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-007439 -n addons-007439
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 logs -n 25
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-009383                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-009383   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ --download-only -p download-docker-804251 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-804251 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ delete  │ -p download-docker-804251                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-804251 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ --download-only -p binary-mirror-557917 --alsologtostderr --binary-mirror http://127.0.0.1:44153 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-557917   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ delete  │ -p binary-mirror-557917                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-557917   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ addons  │ enable dashboard -p addons-007439                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ addons  │ disable dashboard -p addons-007439                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ start   │ -p addons-007439 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:33 UTC │
	│ addons  │ addons-007439 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:33 UTC │ 07 Dec 25 22:33 UTC │
	│ addons  │ addons-007439 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ enable headlamp -p addons-007439 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ ip      │ addons-007439 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-007439                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:35 UTC │
	│ addons  │ addons-007439 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:35 UTC │ 07 Dec 25 22:35 UTC │
	│ addons  │ addons-007439 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:35 UTC │ 07 Dec 25 22:35 UTC │
	│ addons  │ addons-007439 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:39 UTC │ 07 Dec 25 22:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:31:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:31:14.046059  377634 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:31:14.046160  377634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:14.046171  377634 out.go:374] Setting ErrFile to fd 2...
	I1207 22:31:14.046177  377634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:14.046369  377634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:31:14.046902  377634 out.go:368] Setting JSON to false
	I1207 22:31:14.047862  377634 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4423,"bootTime":1765142251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:31:14.047960  377634 start.go:143] virtualization: kvm guest
	I1207 22:31:14.049737  377634 out.go:179] * [addons-007439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:31:14.050928  377634 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:31:14.050931  377634 notify.go:221] Checking for updates...
	I1207 22:31:14.052304  377634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:31:14.053561  377634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:31:14.054761  377634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:31:14.055729  377634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:31:14.056780  377634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:31:14.057907  377634 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:31:14.081483  377634 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:31:14.081641  377634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:14.132808  377634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:31:14.123546778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:14.132917  377634 docker.go:319] overlay module found
	I1207 22:31:14.134771  377634 out.go:179] * Using the docker driver based on user configuration
	I1207 22:31:14.135980  377634 start.go:309] selected driver: docker
	I1207 22:31:14.135998  377634 start.go:927] validating driver "docker" against <nil>
	I1207 22:31:14.136009  377634 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:31:14.136562  377634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:14.189081  377634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:31:14.180290541 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:14.189236  377634 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:31:14.189474  377634 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:31:14.190977  377634 out.go:179] * Using Docker driver with root privileges
	I1207 22:31:14.191977  377634 cni.go:84] Creating CNI manager for ""
	I1207 22:31:14.192043  377634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:31:14.192055  377634 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:31:14.192127  377634 start.go:353] cluster config:
	{Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:31:14.193221  377634 out.go:179] * Starting "addons-007439" primary control-plane node in "addons-007439" cluster
	I1207 22:31:14.194141  377634 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1207 22:31:14.195245  377634 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:31:14.196309  377634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:31:14.196342  377634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1207 22:31:14.196338  377634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:31:14.196350  377634 cache.go:65] Caching tarball of preloaded images
	I1207 22:31:14.196478  377634 preload.go:238] Found /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1207 22:31:14.196494  377634 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1207 22:31:14.196887  377634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/config.json ...
	I1207 22:31:14.196918  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/config.json: {Name:mkd711f4a39ae29c1469946698f365f25da8671d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:14.211480  377634 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:31:14.211584  377634 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:31:14.211602  377634 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:31:14.211609  377634 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:31:14.211619  377634 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:31:14.211628  377634 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1207 22:31:26.188448  377634 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1207 22:31:26.188490  377634 cache.go:243] Successfully downloaded all kic artifacts
	I1207 22:31:26.188559  377634 start.go:360] acquireMachinesLock for addons-007439: {Name:mk0a328d31f3ea0280e3d48742116a2ba7bfc2ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 22:31:26.188671  377634 start.go:364] duration metric: took 86.787µs to acquireMachinesLock for "addons-007439"
	I1207 22:31:26.188724  377634 start.go:93] Provisioning new machine with config: &{Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 22:31:26.188820  377634 start.go:125] createHost starting for "" (driver="docker")
	I1207 22:31:26.190516  377634 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1207 22:31:26.190774  377634 start.go:159] libmachine.API.Create for "addons-007439" (driver="docker")
	I1207 22:31:26.190816  377634 client.go:173] LocalClient.Create starting
	I1207 22:31:26.190920  377634 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem
	I1207 22:31:26.218644  377634 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem
	I1207 22:31:26.283656  377634 cli_runner.go:164] Run: docker network inspect addons-007439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 22:31:26.299907  377634 cli_runner.go:211] docker network inspect addons-007439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 22:31:26.299991  377634 network_create.go:284] running [docker network inspect addons-007439] to gather additional debugging logs...
	I1207 22:31:26.300010  377634 cli_runner.go:164] Run: docker network inspect addons-007439
	W1207 22:31:26.315119  377634 cli_runner.go:211] docker network inspect addons-007439 returned with exit code 1
	I1207 22:31:26.315144  377634 network_create.go:287] error running [docker network inspect addons-007439]: docker network inspect addons-007439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-007439 not found
	I1207 22:31:26.315159  377634 network_create.go:289] output of [docker network inspect addons-007439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-007439 not found
	
	** /stderr **
	I1207 22:31:26.315270  377634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:31:26.332214  377634 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014c1df0}
	I1207 22:31:26.332254  377634 network_create.go:124] attempt to create docker network addons-007439 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 22:31:26.332295  377634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-007439 addons-007439
	I1207 22:31:26.375270  377634 network_create.go:108] docker network addons-007439 192.168.49.0/24 created
	I1207 22:31:26.375297  377634 kic.go:121] calculated static IP "192.168.49.2" for the "addons-007439" container
	I1207 22:31:26.375373  377634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 22:31:26.390820  377634 cli_runner.go:164] Run: docker volume create addons-007439 --label name.minikube.sigs.k8s.io=addons-007439 --label created_by.minikube.sigs.k8s.io=true
	I1207 22:31:26.407217  377634 oci.go:103] Successfully created a docker volume addons-007439
	I1207 22:31:26.407302  377634 cli_runner.go:164] Run: docker run --rm --name addons-007439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-007439 --entrypoint /usr/bin/test -v addons-007439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 22:31:32.482129  377634 cli_runner.go:217] Completed: docker run --rm --name addons-007439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-007439 --entrypoint /usr/bin/test -v addons-007439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (6.074770286s)
	I1207 22:31:32.482166  377634 oci.go:107] Successfully prepared a docker volume addons-007439
	I1207 22:31:32.482240  377634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:31:32.482269  377634 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 22:31:32.482342  377634 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-007439:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 22:31:36.173057  377634 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-007439:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.690665069s)
	I1207 22:31:36.173093  377634 kic.go:203] duration metric: took 3.690821337s to extract preloaded images to volume ...
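[editor's note] A hedged Go sketch of the preload-extraction step above: tar is run inside a throwaway container so the lz4 tarball lands on the named Docker volume, and the elapsed time is measured the way the "duration metric" lines report it. Image tag and container paths are taken from the log; the local tarball path and function name are hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreloadToVolume mirrors the `docker run --rm --entrypoint /usr/bin/tar`
// invocation in the log: mount the tarball read-only, mount the volume, extract.
func extractPreloadToVolume(tarball, volume, image string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreloadToVolume(
		"/path/to/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4",
		"addons-007439",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("extraction took", d)
}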
	W1207 22:31:36.173207  377634 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 22:31:36.173252  377634 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 22:31:36.173301  377634 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 22:31:36.225955  377634 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-007439 --name addons-007439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-007439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-007439 --network addons-007439 --ip 192.168.49.2 --volume addons-007439:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 22:31:36.469349  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Running}}
	I1207 22:31:36.488298  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:36.506043  377634 cli_runner.go:164] Run: docker exec addons-007439 stat /var/lib/dpkg/alternatives/iptables
	I1207 22:31:36.548077  377634 oci.go:144] the created container "addons-007439" has a running status.
	I1207 22:31:36.548114  377634 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa...
	I1207 22:31:36.628425  377634 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 22:31:36.651401  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:36.667493  377634 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 22:31:36.667523  377634 kic_runner.go:114] Args: [docker exec --privileged addons-007439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 22:31:36.712398  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:36.734452  377634 machine.go:94] provisionDockerMachine start ...
	I1207 22:31:36.734548  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:36.758138  377634 main.go:143] libmachine: Using SSH client type: native
	I1207 22:31:36.758553  377634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I1207 22:31:36.758611  377634 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 22:31:36.759278  377634 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56188->127.0.0.1:33145: read: connection reset by peer
	I1207 22:31:39.883161  377634 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-007439
	
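[editor's note] The first SSH dial above is reset because sshd inside the freshly started container is not ready yet, and a later attempt succeeds a few seconds later. A minimal Go sketch of that retry pattern, assuming a plain TCP dial is enough to show the idea; the forwarded port 33145 comes from the log, the attempt count and backoff are assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps dialing the forwarded SSH port until it accepts a
// connection or the attempts are exhausted.
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("ssh port %s not ready after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33145", 10, 2*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}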
	I1207 22:31:39.883189  377634 ubuntu.go:182] provisioning hostname "addons-007439"
	I1207 22:31:39.883260  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:39.900647  377634 main.go:143] libmachine: Using SSH client type: native
	I1207 22:31:39.900894  377634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I1207 22:31:39.900908  377634 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-007439 && echo "addons-007439" | sudo tee /etc/hostname
	I1207 22:31:40.033247  377634 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-007439
	
	I1207 22:31:40.033328  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.051011  377634 main.go:143] libmachine: Using SSH client type: native
	I1207 22:31:40.051212  377634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I1207 22:31:40.051227  377634 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-007439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-007439/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-007439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 22:31:40.174809  377634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 22:31:40.174841  377634 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-372312/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-372312/.minikube}
	I1207 22:31:40.174876  377634 ubuntu.go:190] setting up certificates
	I1207 22:31:40.174887  377634 provision.go:84] configureAuth start
	I1207 22:31:40.174944  377634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-007439
	I1207 22:31:40.192667  377634 provision.go:143] copyHostCerts
	I1207 22:31:40.192762  377634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem (1123 bytes)
	I1207 22:31:40.192908  377634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem (1679 bytes)
	I1207 22:31:40.193008  377634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem (1082 bytes)
	I1207 22:31:40.193088  377634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem org=jenkins.addons-007439 san=[127.0.0.1 192.168.49.2 addons-007439 localhost minikube]
	I1207 22:31:40.290023  377634 provision.go:177] copyRemoteCerts
	I1207 22:31:40.290075  377634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 22:31:40.290107  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.306794  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.398002  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 22:31:40.415792  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 22:31:40.431954  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 22:31:40.448004  377634 provision.go:87] duration metric: took 273.102278ms to configureAuth
	I1207 22:31:40.448026  377634 ubuntu.go:206] setting minikube options for container-runtime
	I1207 22:31:40.448176  377634 config.go:182] Loaded profile config "addons-007439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:31:40.448189  377634 machine.go:97] duration metric: took 3.713716331s to provisionDockerMachine
	I1207 22:31:40.448196  377634 client.go:176] duration metric: took 14.257372279s to LocalClient.Create
	I1207 22:31:40.448213  377634 start.go:167] duration metric: took 14.257441921s to libmachine.API.Create "addons-007439"
	I1207 22:31:40.448223  377634 start.go:293] postStartSetup for "addons-007439" (driver="docker")
	I1207 22:31:40.448231  377634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 22:31:40.448274  377634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 22:31:40.448304  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.465194  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.558676  377634 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 22:31:40.561924  377634 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 22:31:40.561960  377634 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 22:31:40.561996  377634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/addons for local assets ...
	I1207 22:31:40.562060  377634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/files for local assets ...
	I1207 22:31:40.562088  377634 start.go:296] duration metric: took 113.859744ms for postStartSetup
	I1207 22:31:40.562415  377634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-007439
	I1207 22:31:40.579687  377634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/config.json ...
	I1207 22:31:40.579935  377634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 22:31:40.579973  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.596102  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.684193  377634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 22:31:40.688530  377634 start.go:128] duration metric: took 14.499693329s to createHost
	I1207 22:31:40.688557  377634 start.go:83] releasing machines lock for "addons-007439", held for 14.499867519s
	I1207 22:31:40.688633  377634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-007439
	I1207 22:31:40.705296  377634 ssh_runner.go:195] Run: cat /version.json
	I1207 22:31:40.705339  377634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 22:31:40.705345  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.705396  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.722684  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.723321  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.810214  377634 ssh_runner.go:195] Run: systemctl --version
	I1207 22:31:40.862529  377634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 22:31:40.866876  377634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 22:31:40.866936  377634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 22:31:40.890371  377634 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 22:31:40.890391  377634 start.go:496] detecting cgroup driver to use...
	I1207 22:31:40.890418  377634 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:31:40.890461  377634 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 22:31:40.903101  377634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 22:31:40.914203  377634 docker.go:218] disabling cri-docker service (if available) ...
	I1207 22:31:40.914242  377634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 22:31:40.928602  377634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 22:31:40.943847  377634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 22:31:41.018127  377634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 22:31:41.098293  377634 docker.go:234] disabling docker service ...
	I1207 22:31:41.098361  377634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 22:31:41.115634  377634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 22:31:41.126792  377634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 22:31:41.206925  377634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 22:31:41.285112  377634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 22:31:41.296219  377634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:31:41.309172  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1207 22:31:41.318956  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 22:31:41.327167  377634 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1207 22:31:41.327210  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1207 22:31:41.335229  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:31:41.343139  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 22:31:41.350866  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:31:41.358744  377634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 22:31:41.365972  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 22:31:41.373717  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1207 22:31:41.381538  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1207 22:31:41.389762  377634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 22:31:41.396424  377634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 22:31:41.403082  377634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:31:41.476754  377634 ssh_runner.go:195] Run: sudo systemctl restart containerd
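[editor's note] The block above rewrites /etc/containerd/config.toml with a series of sed one-liners and then restarts containerd. A small Go sketch of the central edit (forcing SystemdCgroup = true), assuming a local copy of the file; on the node this is done as root over SSH. The function name setSystemdCgroup is hypothetical.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips SystemdCgroup to true in a containerd config.toml,
// the same edit the log performs with `sed -i -r`.
func setSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setSystemdCgroup("./config.toml"); err != nil {
		fmt.Println(err)
	}
}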
	I1207 22:31:41.570761  377634 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1207 22:31:41.570835  377634 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1207 22:31:41.574635  377634 start.go:564] Will wait 60s for crictl version
	I1207 22:31:41.574678  377634 ssh_runner.go:195] Run: which crictl
	I1207 22:31:41.578062  377634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 22:31:41.600434  377634 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1207 22:31:41.600502  377634 ssh_runner.go:195] Run: containerd --version
	I1207 22:31:41.620447  377634 ssh_runner.go:195] Run: containerd --version
	I1207 22:31:41.642253  377634 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1207 22:31:41.643409  377634 cli_runner.go:164] Run: docker network inspect addons-007439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:31:41.659972  377634 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 22:31:41.663594  377634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
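[editor's note] The bash one-liner above makes the hosts entry idempotent: strip any existing line ending in the host name, then append the fresh mapping. A minimal Go sketch of the same idea, assuming a local file path; minikube runs the shell equivalent over SSH with sudo against /etc/hosts inside the node.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so exactly one line maps name to ip.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Skip empty lines and any previous mapping for this name.
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.example", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}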
	I1207 22:31:41.673250  377634 kubeadm.go:884] updating cluster {Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 22:31:41.673388  377634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:31:41.673440  377634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:31:41.696938  377634 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 22:31:41.696956  377634 containerd.go:534] Images already preloaded, skipping extraction
	I1207 22:31:41.696995  377634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:31:41.718868  377634 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 22:31:41.718889  377634 cache_images.go:86] Images are preloaded, skipping loading
	I1207 22:31:41.718898  377634 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 containerd true true} ...
	I1207 22:31:41.718994  377634 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-007439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 22:31:41.719040  377634 ssh_runner.go:195] Run: sudo crictl info
	I1207 22:31:41.743660  377634 cni.go:84] Creating CNI manager for ""
	I1207 22:31:41.743690  377634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:31:41.743742  377634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 22:31:41.743777  377634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-007439 NodeName:addons-007439 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 22:31:41.743934  377634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-007439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 22:31:41.744006  377634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 22:31:41.751163  377634 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 22:31:41.751210  377634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 22:31:41.758100  377634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1207 22:31:41.769339  377634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 22:31:41.782734  377634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1207 22:31:41.793800  377634 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 22:31:41.796975  377634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:31:41.805955  377634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:31:41.883376  377634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:31:41.904773  377634 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439 for IP: 192.168.49.2
	I1207 22:31:41.904793  377634 certs.go:195] generating shared ca certs ...
	I1207 22:31:41.904812  377634 certs.go:227] acquiring lock for ca certs: {Name:mkfbf8cc1b70595007aee06919ab1fb852308b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:41.904947  377634 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key
	I1207 22:31:42.130091  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt ...
	I1207 22:31:42.130130  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt: {Name:mk8dfbdfb75c8c0479e251862d216908e2b2b3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.130321  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key ...
	I1207 22:31:42.130344  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key: {Name:mk6a57f170ddef037fd2cfec70ffb4115970df50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.130455  377634 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key
	I1207 22:31:42.166135  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt ...
	I1207 22:31:42.166156  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt: {Name:mk1808bea8f985a935bad71a78e2b0c077dd6970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.166273  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key ...
	I1207 22:31:42.166283  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key: {Name:mkdba25eb46af80fa654f22940694d57b073ce1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.166351  377634 certs.go:257] generating profile certs ...
	I1207 22:31:42.166408  377634 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.key
	I1207 22:31:42.166421  377634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt with IP's: []
	I1207 22:31:42.258862  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt ...
	I1207 22:31:42.258884  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: {Name:mk47aa93b7952dc03abe35072814a68cf0a9a6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.259012  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.key ...
	I1207 22:31:42.259023  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.key: {Name:mk0dbf413ad77f8502a2b9c12fce4a6e1a599682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.259112  377634 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546
	I1207 22:31:42.259133  377634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1207 22:31:42.339199  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546 ...
	I1207 22:31:42.339218  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546: {Name:mkce0b61820000a6ea0be206a089dfae591a3b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.339330  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546 ...
	I1207 22:31:42.339342  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546: {Name:mkac293cbdbd7d1e7dbb5dd59a463a45c1c2fc0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.339408  377634 certs.go:382] copying /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546 -> /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt
	I1207 22:31:42.339491  377634 certs.go:386] copying /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546 -> /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key
	I1207 22:31:42.339546  377634 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key
	I1207 22:31:42.339563  377634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt with IP's: []
	I1207 22:31:42.470469  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt ...
	I1207 22:31:42.470488  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt: {Name:mk1ccdb83dc0949c0c9aa38955a9de59a9a129d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.470603  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key ...
	I1207 22:31:42.470614  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key: {Name:mk2faec993162ad0b712cd77d17256d23a1bcabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.470781  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 22:31:42.470820  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem (1082 bytes)
	I1207 22:31:42.470851  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem (1123 bytes)
	I1207 22:31:42.470878  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem (1679 bytes)
	I1207 22:31:42.471419  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 22:31:42.489246  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 22:31:42.505842  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 22:31:42.522237  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1207 22:31:42.538105  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1207 22:31:42.553666  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 22:31:42.569294  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 22:31:42.585069  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 22:31:42.600618  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 22:31:42.619113  377634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 22:31:42.630420  377634 ssh_runner.go:195] Run: openssl version
	I1207 22:31:42.636124  377634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.642720  377634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 22:31:42.651336  377634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.654526  377634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.654567  377634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.687495  377634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 22:31:42.694211  377634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
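[editor's note] The two commands above compute the OpenSSL subject hash of the minikube CA and symlink the certificate into /etc/ssl/certs as <hash>.0 (here b5213941.0), which is how the system trust store finds it. A hedged Go sketch of the same pattern, assuming openssl is on PATH; in the real run the symlink is created over SSH with sudo, and the function name linkCAByHash is hypothetical.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash symlinks certPath into certsDir under its OpenSSL subject hash.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash failed: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}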
	I1207 22:31:42.700799  377634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 22:31:42.703946  377634 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 22:31:42.703991  377634 kubeadm.go:401] StartCluster: {Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:31:42.704084  377634 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1207 22:31:42.704137  377634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:31:42.729420  377634 cri.go:89] found id: ""
	I1207 22:31:42.729471  377634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 22:31:42.736488  377634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 22:31:42.743600  377634 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 22:31:42.743642  377634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 22:31:42.750528  377634 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 22:31:42.750543  377634 kubeadm.go:158] found existing configuration files:
	
	I1207 22:31:42.750579  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 22:31:42.757285  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 22:31:42.757332  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 22:31:42.763931  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 22:31:42.770781  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 22:31:42.770832  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 22:31:42.777578  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 22:31:42.784446  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 22:31:42.784501  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 22:31:42.791268  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 22:31:42.798206  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 22:31:42.798247  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 22:31:42.805865  377634 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 22:31:42.844360  377634 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 22:31:42.844448  377634 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 22:31:42.862864  377634 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 22:31:42.862989  377634 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 22:31:42.863049  377634 kubeadm.go:319] OS: Linux
	I1207 22:31:42.863120  377634 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 22:31:42.863181  377634 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 22:31:42.863254  377634 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 22:31:42.863332  377634 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 22:31:42.863405  377634 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 22:31:42.863481  377634 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 22:31:42.863558  377634 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 22:31:42.863597  377634 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 22:31:42.917066  377634 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 22:31:42.917193  377634 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 22:31:42.917304  377634 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 22:31:42.921884  377634 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 22:31:42.923828  377634 out.go:252]   - Generating certificates and keys ...
	I1207 22:31:42.923918  377634 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 22:31:42.924002  377634 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 22:31:43.049771  377634 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 22:31:43.369053  377634 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 22:31:43.427339  377634 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 22:31:43.502587  377634 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 22:31:43.635435  377634 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 22:31:43.635574  377634 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-007439 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:31:43.909608  377634 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 22:31:43.909794  377634 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-007439 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:31:44.142915  377634 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 22:31:44.161353  377634 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 22:31:44.322435  377634 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 22:31:44.322582  377634 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 22:31:44.400575  377634 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 22:31:44.959931  377634 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 22:31:45.280678  377634 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 22:31:45.585157  377634 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 22:31:45.953829  377634 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 22:31:45.954241  377634 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 22:31:45.958871  377634 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 22:31:45.960210  377634 out.go:252]   - Booting up control plane ...
	I1207 22:31:45.960298  377634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 22:31:45.960365  377634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 22:31:45.960975  377634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 22:31:45.974662  377634 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 22:31:45.974823  377634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 22:31:45.980677  377634 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 22:31:45.980971  377634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 22:31:45.981021  377634 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 22:31:46.076366  377634 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 22:31:46.076517  377634 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 22:31:47.078047  377634 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00177457s
	I1207 22:31:47.081499  377634 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 22:31:47.081631  377634 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1207 22:31:47.081751  377634 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 22:31:47.081831  377634 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 22:31:48.357388  377634 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.275763293s
	I1207 22:31:48.843156  377634 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.761626171s
	I1207 22:31:50.582956  377634 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501331108s
	I1207 22:31:50.600935  377634 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 22:31:50.609203  377634 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 22:31:50.617404  377634 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 22:31:50.617634  377634 kubeadm.go:319] [mark-control-plane] Marking the node addons-007439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 22:31:50.625104  377634 kubeadm.go:319] [bootstrap-token] Using token: izaz1b.uyzwo916dp9ka22u
	I1207 22:31:50.626539  377634 out.go:252]   - Configuring RBAC rules ...
	I1207 22:31:50.626688  377634 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 22:31:50.629435  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 22:31:50.633988  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 22:31:50.636884  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 22:31:50.638869  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 22:31:50.640996  377634 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 22:31:50.988581  377634 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 22:31:51.400758  377634 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 22:31:51.988831  377634 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 22:31:51.989575  377634 kubeadm.go:319] 
	I1207 22:31:51.989637  377634 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 22:31:51.989647  377634 kubeadm.go:319] 
	I1207 22:31:51.989791  377634 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 22:31:51.989817  377634 kubeadm.go:319] 
	I1207 22:31:51.989858  377634 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 22:31:51.989965  377634 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 22:31:51.990047  377634 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 22:31:51.990057  377634 kubeadm.go:319] 
	I1207 22:31:51.990129  377634 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 22:31:51.990138  377634 kubeadm.go:319] 
	I1207 22:31:51.990207  377634 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 22:31:51.990216  377634 kubeadm.go:319] 
	I1207 22:31:51.990296  377634 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 22:31:51.990407  377634 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 22:31:51.990490  377634 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 22:31:51.990507  377634 kubeadm.go:319] 
	I1207 22:31:51.990644  377634 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 22:31:51.990793  377634 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 22:31:51.990804  377634 kubeadm.go:319] 
	I1207 22:31:51.990918  377634 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token izaz1b.uyzwo916dp9ka22u \
	I1207 22:31:51.991066  377634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:85ac6ff048e2b2646b65f112736b58cb81d3dd5c10b3ad35b3c1aa0187f127e5 \
	I1207 22:31:51.991099  377634 kubeadm.go:319] 	--control-plane 
	I1207 22:31:51.991107  377634 kubeadm.go:319] 
	I1207 22:31:51.991174  377634 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 22:31:51.991179  377634 kubeadm.go:319] 
	I1207 22:31:51.991248  377634 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token izaz1b.uyzwo916dp9ka22u \
	I1207 22:31:51.991329  377634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:85ac6ff048e2b2646b65f112736b58cb81d3dd5c10b3ad35b3c1aa0187f127e5 
	I1207 22:31:51.993582  377634 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 22:31:51.993688  377634 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
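The join commands above embed the discovery token CA certificate hash. As a hedged aside, that hash can be recomputed on the control plane with the standard pipeline from the kubeadm documentation (assuming the default /etc/kubernetes/pki layout, which this cluster uses):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:85ac6ff0... value printed in both join commands.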
	I1207 22:31:51.993738  377634 cni.go:84] Creating CNI manager for ""
	I1207 22:31:51.993752  377634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:31:51.996153  377634 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 22:31:51.997267  377634 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 22:31:52.001486  377634 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 22:31:52.001501  377634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 22:31:52.014187  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
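The cni.yaml applied here is the kindnet manifest minikube generates for the docker driver with containerd. A minimal follow-up check, assuming kindnet runs as a DaemonSet named kindnet in kube-system (the usual layout, not confirmed by this log), would be:

    kubectl --context addons-007439 -n kube-system get daemonset kindnet
    kubectl --context addons-007439 -n kube-system rollout status daemonset kindnet --timeout=60s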
	I1207 22:31:52.205777  377634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 22:31:52.205858  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:52.205881  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-007439 minikube.k8s.io/updated_at=2025_12_07T22_31_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=addons-007439 minikube.k8s.io/primary=true
	I1207 22:31:52.295296  377634 ops.go:34] apiserver oom_adj: -16
	I1207 22:31:52.295475  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:52.795974  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:53.295942  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:53.796173  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:54.295582  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:54.795800  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:55.296147  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:55.796072  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:55.858788  377634 kubeadm.go:1114] duration metric: took 3.652991512s to wait for elevateKubeSystemPrivileges
	I1207 22:31:55.858822  377634 kubeadm.go:403] duration metric: took 13.154834236s to StartCluster
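The repeated "get sa default" calls above are minikube waiting for the default service account to be provisioned before it treats kube-system privilege elevation as complete; together with the minikube-rbac cluster-admin binding created at 22:31:52, the step took about 3.65s. An equivalent shell sketch of that wait-then-bind pattern, against the same kubeconfig, looks like:

    export KUBECONFIG=/var/lib/minikube/kubeconfig
    until kubectl get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default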
	I1207 22:31:55.858846  377634 settings.go:142] acquiring lock: {Name:mk7216554e80629b12e2915c4ff3500dbf3bd589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:55.858960  377634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:31:55.859369  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/kubeconfig: {Name:mkfa57fc45c954633f03c454814d9663bdf56b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:55.859582  377634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 22:31:55.859621  377634 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 22:31:55.859663  377634 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
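The toEnable map above is the complete addon set for this profile: ingress, ingress-dns, registry, registry-creds, csi-hostpath-driver, volumesnapshots, metrics-server, volcano, yakd, gcp-auth, and the storage provisioners, among others. Outside the test harness, the same per-profile state can be inspected and toggled with the minikube CLI, for example:

    minikube -p addons-007439 addons list
    minikube -p addons-007439 addons enable ingress
    minikube -p addons-007439 addons disable volcano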
	I1207 22:31:55.859814  377634 addons.go:70] Setting yakd=true in profile "addons-007439"
	I1207 22:31:55.859832  377634 addons.go:70] Setting default-storageclass=true in profile "addons-007439"
	I1207 22:31:55.859847  377634 addons.go:70] Setting registry-creds=true in profile "addons-007439"
	I1207 22:31:55.859859  377634 addons.go:239] Setting addon registry-creds=true in "addons-007439"
	I1207 22:31:55.859862  377634 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-007439"
	I1207 22:31:55.859861  377634 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-007439"
	I1207 22:31:55.859885  377634 addons.go:70] Setting inspektor-gadget=true in profile "addons-007439"
	I1207 22:31:55.859894  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859906  377634 addons.go:70] Setting ingress-dns=true in profile "addons-007439"
	I1207 22:31:55.859921  377634 addons.go:239] Setting addon ingress-dns=true in "addons-007439"
	I1207 22:31:55.859838  377634 addons.go:239] Setting addon yakd=true in "addons-007439"
	I1207 22:31:55.859951  377634 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-007439"
	I1207 22:31:55.859956  377634 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-007439"
	I1207 22:31:55.859965  377634 addons.go:70] Setting storage-provisioner=true in profile "addons-007439"
	I1207 22:31:55.859966  377634 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-007439"
	I1207 22:31:55.859976  377634 addons.go:239] Setting addon storage-provisioner=true in "addons-007439"
	I1207 22:31:55.859986  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859990  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859990  377634 addons.go:70] Setting volcano=true in profile "addons-007439"
	I1207 22:31:55.859994  377634 addons.go:70] Setting cloud-spanner=true in profile "addons-007439"
	I1207 22:31:55.859869  377634 config.go:182] Loaded profile config "addons-007439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:31:55.860003  377634 addons.go:239] Setting addon volcano=true in "addons-007439"
	I1207 22:31:55.860013  377634 addons.go:239] Setting addon cloud-spanner=true in "addons-007439"
	I1207 22:31:55.860029  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860042  377634 addons.go:70] Setting volumesnapshots=true in profile "addons-007439"
	I1207 22:31:55.859993  377634 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-007439"
	I1207 22:31:55.860079  377634 addons.go:239] Setting addon volumesnapshots=true in "addons-007439"
	I1207 22:31:55.860096  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860101  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860236  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.859898  377634 addons.go:239] Setting addon inspektor-gadget=true in "addons-007439"
	I1207 22:31:55.860307  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860407  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860478  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860506  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860542  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.859957  377634 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-007439"
	I1207 22:31:55.860609  377634 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-007439"
	I1207 22:31:55.860646  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860786  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860891  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.859984  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860030  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.861973  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.862114  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860407  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.862180  377634 out.go:179] * Verifying Kubernetes components...
	I1207 22:31:55.859957  377634 addons.go:70] Setting metrics-server=true in profile "addons-007439"
	I1207 22:31:55.862307  377634 addons.go:239] Setting addon metrics-server=true in "addons-007439"
	I1207 22:31:55.862335  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859819  377634 addons.go:70] Setting ingress=true in profile "addons-007439"
	I1207 22:31:55.862420  377634 addons.go:239] Setting addon ingress=true in "addons-007439"
	I1207 22:31:55.862466  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859947  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859971  377634 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-007439"
	I1207 22:31:55.862749  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859909  377634 addons.go:70] Setting registry=true in profile "addons-007439"
	I1207 22:31:55.862967  377634 addons.go:239] Setting addon registry=true in "addons-007439"
	I1207 22:31:55.863000  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859946  377634 addons.go:70] Setting gcp-auth=true in profile "addons-007439"
	I1207 22:31:55.863158  377634 mustload.go:66] Loading cluster: addons-007439
	I1207 22:31:55.863634  377634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:31:55.868356  377634 config.go:182] Loaded profile config "addons-007439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:31:55.869009  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.871345  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.871956  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.872478  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.875249  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.875893  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.898872  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 22:31:55.899035  377634 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1207 22:31:55.899229  377634 addons.go:239] Setting addon default-storageclass=true in "addons-007439"
	I1207 22:31:55.899281  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.900139  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.900393  377634 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:31:55.900410  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 22:31:55.900458  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.901619  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 22:31:55.902534  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 22:31:55.904317  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 22:31:55.906166  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 22:31:55.907183  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 22:31:55.909645  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 22:31:55.910536  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 22:31:55.912687  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 22:31:55.912773  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 22:31:55.912922  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.935727  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 22:31:55.936863  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 22:31:55.936931  377634 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 22:31:55.937040  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.938374  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.948529  377634 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 22:31:55.948670  377634 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1207 22:31:55.949967  377634 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1207 22:31:55.950026  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 22:31:55.950131  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.950678  377634 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:31:55.950859  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 22:31:55.950914  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.962416  377634 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-007439"
	I1207 22:31:55.962473  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.966628  377634 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1207 22:31:55.966688  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.967492  377634 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1207 22:31:55.969441  377634 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1207 22:31:55.971487  377634 out.go:179]   - Using image docker.io/registry:3.0.0
	I1207 22:31:55.971539  377634 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:31:55.971555  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1207 22:31:55.971598  377634 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:31:55.971609  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1207 22:31:55.971612  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.971660  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.971886  377634 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1207 22:31:55.972439  377634 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 22:31:55.972452  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1207 22:31:55.972496  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.972989  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1207 22:31:55.973007  377634 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1207 22:31:55.973068  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.974069  377634 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1207 22:31:55.974969  377634 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:31:55.974986  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1207 22:31:55.975029  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.977295  377634 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1207 22:31:55.980746  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1207 22:31:55.982082  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:31:55.983148  377634 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1207 22:31:55.984057  377634 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1207 22:31:55.984439  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:31:55.985186  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 22:31:55.985204  377634 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 22:31:55.985262  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.985976  377634 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:31:55.985989  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1207 22:31:55.986044  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.989511  377634 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1207 22:31:56.000662  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.002832  377634 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1207 22:31:56.002860  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1207 22:31:56.002931  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.008834  377634 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 22:31:56.008856  377634 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 22:31:56.008919  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.019133  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.019391  377634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 22:31:56.021090  377634 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1207 22:31:56.022268  377634 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:31:56.022335  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1207 22:31:56.022423  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.026394  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.028339  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.031562  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.037000  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.060489  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.065924  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.066572  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.070994  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.071476  377634 out.go:179]   - Using image docker.io/busybox:stable
	I1207 22:31:56.072753  377634 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 22:31:56.072939  377634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:31:56.073774  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.074003  377634 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:31:56.074023  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 22:31:56.074076  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.074680  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.076648  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	W1207 22:31:56.077126  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.077153  377634 retry.go:31] will retry after 139.18215ms: ssh: handshake failed: EOF
	W1207 22:31:56.078110  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.078132  377634 retry.go:31] will retry after 281.656254ms: ssh: handshake failed: EOF
	W1207 22:31:56.079315  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.079334  377634 retry.go:31] will retry after 211.601156ms: ssh: handshake failed: EOF
	I1207 22:31:56.086432  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.093777  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	W1207 22:31:56.097728  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.097750  377634 retry.go:31] will retry after 139.387321ms: ssh: handshake failed: EOF
	I1207 22:31:56.106146  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.165612  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 22:31:56.165635  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 22:31:56.173159  377634 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 22:31:56.173185  377634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 22:31:56.180931  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 22:31:56.180955  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 22:31:56.190950  377634 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 22:31:56.190970  377634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 22:31:56.191007  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 22:31:56.194735  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:31:56.195044  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:31:56.204753  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:31:56.208926  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 22:31:56.209000  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 22:31:56.214174  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:31:56.215725  377634 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 22:31:56.215746  377634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 22:31:56.217842  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:31:56.222733  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:31:56.230262  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1207 22:31:56.230284  377634 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1207 22:31:56.232242  377634 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 22:31:56.232285  377634 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 22:31:56.243923  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 22:31:56.243968  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 22:31:56.247826  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 22:31:56.247853  377634 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 22:31:56.256272  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:31:56.272404  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1207 22:31:56.272431  377634 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1207 22:31:56.274678  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 22:31:56.274713  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 22:31:56.290791  377634 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:31:56.290820  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 22:31:56.301498  377634 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:31:56.301527  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 22:31:56.320178  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:31:56.340683  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1207 22:31:56.340735  377634 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1207 22:31:56.348774  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 22:31:56.348860  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 22:31:56.352926  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:31:56.391201  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:31:56.391227  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1207 22:31:56.391358  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 22:31:56.391388  377634 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 22:31:56.421965  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 22:31:56.421992  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 22:31:56.429207  377634 node_ready.go:35] waiting up to 6m0s for node "addons-007439" to be "Ready" ...
	I1207 22:31:56.429561  377634 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
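The host record injected here comes from the configmap edit started at 22:31:56: the sed expression splices, immediately before the "forward . /etc/resolv.conf" line of the CoreDNS Corefile,

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

and adds a log directive ahead of errors. The result can be checked with:

    kubectl --context addons-007439 -n kube-system get configmap coredns -o yaml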
	I1207 22:31:56.451140  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 22:31:56.451167  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 22:31:56.463519  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:31:56.484183  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:31:56.487099  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 22:31:56.487126  377634 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 22:31:56.515981  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 22:31:56.516013  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 22:31:56.520805  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:31:56.520834  377634 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 22:31:56.525493  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1207 22:31:56.573406  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:31:56.580258  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 22:31:56.642559  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:31:56.642594  377634 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 22:31:56.681563  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:31:56.936397  377634 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-007439" context rescaled to 1 replicas
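The rescale to a single coredns replica trims the default two-replica deployment for a single-node profile; the kubectl equivalent of what is logged here would be:

    kubectl --context addons-007439 -n kube-system scale deployment coredns --replicas=1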
	I1207 22:31:57.597447  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.37467948s)
	I1207 22:31:57.597480  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.379601935s)
	I1207 22:31:57.597511  377634 addons.go:495] Verifying addon ingress=true in "addons-007439"
	I1207 22:31:57.597547  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.341243964s)
	I1207 22:31:57.599097  377634 out.go:179] * Verifying ingress addon...
	I1207 22:31:57.601480  377634 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 22:31:57.604541  377634 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 22:31:57.827744  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.507515s)
	W1207 22:31:57.827792  377634 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 22:31:57.827817  377634 retry.go:31] will retry after 286.656289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
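This failure is the usual CRD ordering race: the VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass object go through a single kubectl apply, so the custom resource is mapped before its CRD is established and the command exits 1. The retry at 22:31:58 succeeds simply because the CRDs already exist by then. A more explicit sequencing, sketched with one CRD name taken from the stdout above, would wait for establishment first:

    kubectl wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml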
	I1207 22:31:57.827843  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.474874637s)
	I1207 22:31:57.827905  377634 addons.go:495] Verifying addon registry=true in "addons-007439"
	I1207 22:31:57.827938  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.364385097s)
	I1207 22:31:57.828007  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.343785726s)
	I1207 22:31:57.829419  377634 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-007439 service yakd-dashboard -n yakd-dashboard
	
	I1207 22:31:57.829428  377634 out.go:179] * Verifying registry addon...
	I1207 22:31:57.831322  377634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 22:31:57.834270  377634 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:31:57.834280  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:58.106720  377634 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 22:31:58.106749  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:58.114637  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:31:58.340149  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (1.814618285s)
	I1207 22:31:58.340266  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.766829512s)
	I1207 22:31:58.340287  377634 addons.go:495] Verifying addon metrics-server=true in "addons-007439"
	I1207 22:31:58.340332  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.760048048s)
	I1207 22:31:58.340734  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.659101212s)
	I1207 22:31:58.340774  377634 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-007439"
	I1207 22:31:58.342015  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:58.342386  377634 out.go:179] * Verifying csi-hostpath-driver addon...
	I1207 22:31:58.344647  377634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1207 22:31:58.349359  377634 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:31:58.349381  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:31:58.432076  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
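The node reports Ready=False here until the CNI pods come up, which is the normal sequence right after kubeadm init. A quick way to watch the condition while the wait loop runs (kubectl describe node shows the kubelet's stated reason) is:

    kubectl --context addons-007439 get node addons-007439 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'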
	I1207 22:31:58.605689  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:58.833930  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:58.846809  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:31:59.104833  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:59.334647  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:59.435997  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:31:59.605290  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:59.833597  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:59.847689  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:00.105488  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:00.333786  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:00.434377  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:00.605149  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:00.681062  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.566375933s)
	I1207 22:32:00.833911  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:00.846788  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:32:00.931865  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:01.104658  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:01.334741  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:01.435717  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:01.604183  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:01.834410  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:01.847319  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:02.105145  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:02.334660  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:02.434965  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:02.604534  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:02.834314  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:02.847546  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:03.104666  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:03.334520  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1207 22:32:03.431302  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:03.435188  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:03.545404  377634 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 22:32:03.545478  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:32:03.562455  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:32:03.605382  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:03.660272  377634 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 22:32:03.672046  377634 addons.go:239] Setting addon gcp-auth=true in "addons-007439"
	I1207 22:32:03.672097  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:32:03.672450  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:32:03.689008  377634 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 22:32:03.689057  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:32:03.705716  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:32:03.795424  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:32:03.796647  377634 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1207 22:32:03.797799  377634 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 22:32:03.797817  377634 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 22:32:03.810327  377634 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 22:32:03.810349  377634 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 22:32:03.822522  377634 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:32:03.822541  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1207 22:32:03.833962  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:03.834960  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:32:03.847291  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:04.104516  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:04.130667  377634 addons.go:495] Verifying addon gcp-auth=true in "addons-007439"
	I1207 22:32:04.131903  377634 out.go:179] * Verifying gcp-auth addon...
	I1207 22:32:04.135414  377634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 22:32:04.204719  377634 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 22:32:04.204736  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
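The kapi.go lines above, and throughout the rest of this log, are the per-addon readiness loop: pods in the addon's namespace are listed by label selector, then each pod's phase is re-checked until it reports Running. Below is a client-go sketch of that pattern, not minikube's actual code: the namespace, selector, and "waiting for pod" message are taken from the log for illustration, while the kubeconfig loading, helper name, and one-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls the namespace until every pod matching the
// label selector reports phase Running, or the context expires. This is
// the shape of the "waiting for pod ... current state: Pending" loop.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second): // assumed poll interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := waitForPodsByLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
		fmt.Println(err)
	}
}
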
	I1207 22:32:04.333926  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:04.346881  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:04.604254  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:04.638337  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:04.833745  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:04.846689  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:05.104848  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:05.137887  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:05.334373  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:05.347396  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:05.605427  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:05.638493  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:05.833925  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:05.847042  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:32:05.932127  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:06.105066  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:06.138110  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:06.334385  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:06.347290  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:06.604087  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:06.637881  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:06.834122  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:06.847136  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:07.104057  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:07.138143  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:07.334832  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:07.346679  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:07.604984  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:07.638029  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:07.834591  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:07.847688  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:08.104254  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:08.138452  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:08.333827  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:08.346889  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:32:08.432063  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:08.605098  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:08.642095  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:08.833496  377634 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:32:08.833514  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:08.847703  377634 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:32:08.847721  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:08.932985  377634 node_ready.go:49] node "addons-007439" is "Ready"
	I1207 22:32:08.933018  377634 node_ready.go:38] duration metric: took 12.503775231s for node "addons-007439" to be "Ready" ...
	I1207 22:32:08.933036  377634 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:32:08.933089  377634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:32:08.955562  377634 api_server.go:72] duration metric: took 13.095896356s to wait for apiserver process to appear ...
	I1207 22:32:08.955610  377634 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:32:08.955637  377634 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 22:32:08.962732  377634 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 22:32:08.964159  377634 api_server.go:141] control plane version: v1.34.2
	I1207 22:32:08.964194  377634 api_server.go:131] duration metric: took 8.574483ms to wait for apiserver health ...
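The api_server.go lines above are the apiserver health gate: the healthz endpoint at https://192.168.49.2:8443/healthz is polled until it returns 200 with body "ok", after which the control-plane version (v1.34.2) is read. A minimal, self-contained sketch of such a probe follows; the InsecureSkipVerify transport and the 500 ms poll interval are illustrative assumptions, not how minikube actually configures its client.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls url until it returns HTTP 200 with body "ok",
// or gives up after timeout — the healthz gate seen in the log.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: a real client would verify the
			// self-signed apiserver cert against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := probeHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
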
	I1207 22:32:08.964207  377634 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:32:08.970744  377634 system_pods.go:59] 20 kube-system pods found
	I1207 22:32:08.970859  377634 system_pods.go:61] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:08.970874  377634 system_pods.go:61] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:08.970885  377634 system_pods.go:61] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:08.970893  377634 system_pods.go:61] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:08.970902  377634 system_pods.go:61] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:08.970921  377634 system_pods.go:61] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:08.970926  377634 system_pods.go:61] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:08.970931  377634 system_pods.go:61] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:08.970936  377634 system_pods.go:61] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:08.970944  377634 system_pods.go:61] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:08.970950  377634 system_pods.go:61] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:08.970956  377634 system_pods.go:61] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:08.970963  377634 system_pods.go:61] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:08.970971  377634 system_pods.go:61] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:08.970979  377634 system_pods.go:61] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:08.970992  377634 system_pods.go:61] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:08.971004  377634 system_pods.go:61] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:08.971015  377634 system_pods.go:61] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.971023  377634 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.971031  377634 system_pods.go:61] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:08.971039  377634 system_pods.go:74] duration metric: took 6.824307ms to wait for pod list to return data ...
	I1207 22:32:08.971051  377634 default_sa.go:34] waiting for default service account to be created ...
	I1207 22:32:08.974302  377634 default_sa.go:45] found service account: "default"
	I1207 22:32:08.974352  377634 default_sa.go:55] duration metric: took 3.293515ms for default service account to be created ...
	I1207 22:32:08.974374  377634 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 22:32:08.980352  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:08.980425  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:08.980447  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:08.980466  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:08.980484  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:08.980518  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:08.980539  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:08.980561  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:08.980578  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:08.980594  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:08.980610  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:08.980625  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:08.980646  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:08.980664  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:08.980680  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:08.980717  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:08.980750  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:08.980770  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:08.980787  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.980804  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.980826  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:08.980857  377634 retry.go:31] will retry after 208.825674ms: missing components: kube-dns
	I1207 22:32:09.105639  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:09.140036  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:09.199864  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:09.199914  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:09.199946  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:09.199959  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:09.199977  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:09.199987  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:09.199994  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:09.200006  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:09.200020  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:09.200027  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:09.200041  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:09.200047  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:09.200054  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:09.200062  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:09.200075  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:09.200084  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:09.200096  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:09.200105  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:09.200115  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.200129  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.200140  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:09.200160  377634 retry.go:31] will retry after 328.124289ms: missing components: kube-dns
	I1207 22:32:09.337143  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:09.438022  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:09.539152  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:09.539184  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:09.539191  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:09.539198  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:09.539204  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:09.539209  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:09.539214  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:09.539218  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:09.539223  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:09.539226  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:09.539232  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:09.539238  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:09.539242  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:09.539250  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:09.539259  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:09.539263  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:09.539269  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:09.539275  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:09.539280  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.539288  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.539293  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:09.539308  377634 retry.go:31] will retry after 332.262547ms: missing components: kube-dns
	I1207 22:32:09.604857  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:09.637744  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:09.834471  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:09.847800  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:09.874557  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:09.874588  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:09.874595  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:09.874602  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:09.874608  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:09.874613  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:09.874618  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:09.874623  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:09.874629  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:09.874634  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:09.874641  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:09.874645  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:09.874649  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:09.874657  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:09.874663  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:09.874671  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:09.874676  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:09.874681  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:09.874686  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.874722  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.874730  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:09.874745  377634 retry.go:31] will retry after 426.570645ms: missing components: kube-dns
	I1207 22:32:10.104933  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:10.137912  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:10.305510  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:10.305542  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:10.305551  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:10.305558  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:10.305563  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:10.305585  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:10.305595  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:10.305599  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:10.305603  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:10.305610  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:10.305616  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:10.305619  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:10.305623  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:10.305631  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:10.305636  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:10.305641  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:10.305647  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:10.305653  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:10.305664  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.305670  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.305681  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:10.305709  377634 retry.go:31] will retry after 579.48594ms: missing components: kube-dns
	I1207 22:32:10.334643  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:10.354188  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:10.605773  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:10.638018  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:10.835343  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:10.848131  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:10.889933  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:10.889966  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:10.889974  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Running
	I1207 22:32:10.889986  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:10.889995  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:10.890003  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:10.890013  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:10.890019  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:10.890029  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:10.890035  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:10.890048  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:10.890053  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:10.890061  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:10.890070  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:10.890080  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:10.890092  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:10.890102  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:10.890113  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:10.890121  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.890133  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.890139  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Running
	I1207 22:32:10.890153  377634 system_pods.go:126] duration metric: took 1.915754378s to wait for k8s-apps to be running ...
	I1207 22:32:10.890164  377634 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 22:32:10.890219  377634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 22:32:10.906540  377634 system_svc.go:56] duration metric: took 16.366971ms WaitForService to wait for kubelet
	I1207 22:32:10.906572  377634 kubeadm.go:587] duration metric: took 15.04691413s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:32:10.906604  377634 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:32:10.909678  377634 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:32:10.909724  377634 node_conditions.go:123] node cpu capacity is 8
	I1207 22:32:10.909744  377634 node_conditions.go:105] duration metric: took 3.134337ms to run NodePressure ...
	I1207 22:32:10.909760  377634 start.go:242] waiting for startup goroutines ...
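The retry.go lines above ("will retry after 208ms ... 328ms ... 579ms: missing components: kube-dns") are a bounded retry with a growing delay around the "are all required kube-system components running?" check; here it resolved in about 1.9 s once coredns flipped to Running. A generic sketch of that retry shape is below; the initial delay, growth factor, jitter, and function names are assumptions, not minikube's exact backoff schedule.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it returns nil or maxWait elapses,
// sleeping a little longer (with jitter) between attempts — the shape of
// the "will retry after ...: missing components: kube-dns" lines.
func retryWithBackoff(check func() error, maxWait time.Duration) error {
	start := time.Now()
	delay := 200 * time.Millisecond // assumed initial delay
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Millisecond), err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", jittered.Round(time.Millisecond), err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the wait between attempts (assumed factor)
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}
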
	I1207 22:32:11.104911  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:11.138422  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:11.334413  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:11.348242  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:11.605150  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:11.638138  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:11.835620  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:11.848219  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:12.207206  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:12.207304  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:12.334989  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:12.351263  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:12.605494  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:12.638752  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:12.834821  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:12.935411  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:13.104955  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:13.138457  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:13.334045  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:13.347712  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:13.686513  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:13.686585  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:13.834664  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:13.848421  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:14.104990  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:14.143039  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:14.335004  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:14.347405  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:14.605743  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:14.706289  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:14.834854  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:14.846912  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:15.104600  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:15.138517  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:15.334133  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:15.347520  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:15.604910  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:15.638165  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:15.834907  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:15.847728  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:16.104782  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:16.137819  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:16.334412  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:16.347552  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:16.605063  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:16.638179  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:16.834799  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:16.848099  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:17.104869  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:17.138465  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:17.334357  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:17.348035  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:17.605579  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:17.639052  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:17.835125  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:17.847679  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:18.104372  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:18.138681  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:18.334670  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:18.348274  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:18.605466  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:18.638821  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:18.835018  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:18.847719  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:19.104506  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:19.138933  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:19.334728  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:19.348285  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:19.605597  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:19.637585  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:19.834517  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:19.848121  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:20.105267  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:20.138341  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:20.333981  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:20.347558  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:20.605240  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:20.638429  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:20.834400  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:20.848061  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:21.104969  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:21.138290  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:21.335222  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:21.348026  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:21.605182  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:21.705680  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:21.834775  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:21.848459  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:22.105829  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:22.137835  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:22.334321  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:22.347720  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:22.604431  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:22.638791  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:22.834496  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:22.847801  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:23.104618  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:23.139084  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:23.335316  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:23.348100  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:23.604420  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:23.641685  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:23.834195  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:23.848319  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:24.104869  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:24.138335  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:24.376588  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:24.376588  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:24.611988  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:24.638784  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:24.834270  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:24.847811  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:25.104511  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:25.138951  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:25.334711  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:25.348560  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:25.604539  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:25.638444  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:25.833886  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:25.847136  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:26.105172  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:26.138554  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:26.334116  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:26.347735  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:26.604304  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:26.638469  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:26.833872  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:26.847202  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:27.105095  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:27.138181  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:27.334918  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:27.347453  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:27.605503  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:27.639257  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:27.835393  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:27.848161  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:28.105234  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:28.138189  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:28.334409  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:28.348025  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:28.605045  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:28.638495  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:28.834500  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:28.847989  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:29.104978  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:29.138132  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:29.334524  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:29.348120  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:29.605256  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:29.638259  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:29.833943  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:29.847203  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:30.105674  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:30.138188  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:30.335315  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:30.347465  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:30.605765  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:30.638181  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:30.835279  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:30.847367  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:31.105066  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:31.138006  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:31.334639  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:31.348255  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:31.606744  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:31.707203  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:31.835137  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:31.847547  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:32.104291  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:32.138363  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:32.333673  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:32.347981  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:32.604665  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:32.637822  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:32.834587  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:32.848059  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:33.104897  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:33.137986  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:33.334726  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:33.348161  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:33.605896  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:33.707355  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:33.834376  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:33.847713  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:34.104872  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:34.138548  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:34.334392  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:34.347734  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:34.617284  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:34.641765  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:34.834059  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:34.847418  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:35.104948  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:35.138339  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:35.334159  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:35.347775  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:35.604463  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:35.704951  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:35.834251  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:35.847899  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:36.104745  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:36.137687  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:36.334472  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:36.348025  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:36.605335  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:36.639444  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:36.835541  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:36.935810  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:37.104098  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:37.137960  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:37.334806  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:37.347343  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:37.605444  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:37.686742  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:37.834649  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:37.848212  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:38.105060  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:38.138286  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:38.334485  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:38.348290  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:38.605560  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:38.681296  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:38.833941  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:38.847211  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:39.104988  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:39.138217  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:39.334785  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:39.347047  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:39.604872  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:39.638068  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:39.834681  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:39.847860  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:40.104799  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:40.137959  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:40.334828  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:40.346954  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:40.604781  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:40.638089  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:40.834954  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:40.847467  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:41.105896  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:41.138643  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:41.334302  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:41.348161  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:41.605223  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:41.639203  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:41.834979  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:41.847725  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:42.105295  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:42.206084  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:42.334497  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:42.347813  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:42.655138  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:42.655138  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:42.834478  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:42.847891  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:43.104710  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:43.137780  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:43.334279  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:43.347733  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:43.604718  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:43.638077  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:43.835160  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:43.847428  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:44.105516  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:44.138837  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:44.334338  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:44.347679  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:44.605390  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:44.638634  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:44.834216  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:44.847924  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:45.105186  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:45.205235  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:45.334814  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:45.347845  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:45.647109  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:45.647156  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:45.835053  377634 kapi.go:107] duration metric: took 48.003727772s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 22:32:45.847133  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:46.105297  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:46.138584  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:46.347997  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:46.605087  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:46.638190  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:46.848565  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:47.104823  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:47.138233  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:47.348486  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:47.604416  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:47.705149  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:47.848098  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:48.105652  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:48.138017  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:48.348205  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:48.604639  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:48.637686  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:48.848128  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:49.104946  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:49.138422  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:49.348907  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:49.604924  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:49.638486  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:49.849089  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:50.105263  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:50.138751  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:50.348212  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:50.604836  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:50.637755  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:50.847880  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:51.111472  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:51.138593  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:51.348513  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:51.605082  377634 kapi.go:107] duration metric: took 54.003598032s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 22:32:51.638597  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:51.848094  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:52.151201  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:52.348522  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:52.680794  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:52.928244  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:53.138820  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:53.347372  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:53.639403  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:53.850421  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:54.138666  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:54.347558  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:54.638591  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:54.847800  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:55.138383  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:55.348394  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:55.639002  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:55.848630  377634 kapi.go:107] duration metric: took 57.503979448s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 22:32:56.138538  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:56.637933  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:57.137939  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:57.638818  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:58.138824  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:58.638943  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:59.138925  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:59.638398  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:00.138445  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:00.638311  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:01.138672  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:01.638823  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:02.139464  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:02.639133  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:03.138367  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:03.701209  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:04.139061  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:04.638295  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:05.138216  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:05.638497  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:06.138823  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:06.639021  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:07.138428  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:07.638413  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:08.138802  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:08.638667  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:09.138899  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:09.638212  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:10.138617  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:10.638874  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:11.138328  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:11.638414  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:12.138926  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:12.638982  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:13.138559  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:13.638433  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:14.138780  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:14.639254  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:15.138362  377634 kapi.go:107] duration metric: took 1m11.002943391s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 22:33:15.139826  377634 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-007439 cluster.
	I1207 22:33:15.141075  377634 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 22:33:15.142206  377634 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 22:33:15.143317  377634 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, yakd, volcano, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1207 22:33:15.144373  377634 addons.go:530] duration metric: took 1m19.284711217s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget yakd volcano metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1207 22:33:15.144415  377634 start.go:247] waiting for cluster config update ...
	I1207 22:33:15.144443  377634 start.go:256] writing updated cluster config ...
	I1207 22:33:15.144758  377634 ssh_runner.go:195] Run: rm -f paused
	I1207 22:33:15.148918  377634 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:33:15.151546  377634 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qv27l" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.155282  377634 pod_ready.go:94] pod "coredns-66bc5c9577-qv27l" is "Ready"
	I1207 22:33:15.155302  377634 pod_ready.go:86] duration metric: took 3.734603ms for pod "coredns-66bc5c9577-qv27l" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.157023  377634 pod_ready.go:83] waiting for pod "etcd-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.160326  377634 pod_ready.go:94] pod "etcd-addons-007439" is "Ready"
	I1207 22:33:15.160345  377634 pod_ready.go:86] duration metric: took 3.301966ms for pod "etcd-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.162089  377634 pod_ready.go:83] waiting for pod "kube-apiserver-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.165157  377634 pod_ready.go:94] pod "kube-apiserver-addons-007439" is "Ready"
	I1207 22:33:15.165177  377634 pod_ready.go:86] duration metric: took 3.071138ms for pod "kube-apiserver-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.166767  377634 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.551957  377634 pod_ready.go:94] pod "kube-controller-manager-addons-007439" is "Ready"
	I1207 22:33:15.551982  377634 pod_ready.go:86] duration metric: took 385.197709ms for pod "kube-controller-manager-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.752958  377634 pod_ready.go:83] waiting for pod "kube-proxy-724nh" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.152105  377634 pod_ready.go:94] pod "kube-proxy-724nh" is "Ready"
	I1207 22:33:16.152131  377634 pod_ready.go:86] duration metric: took 399.134038ms for pod "kube-proxy-724nh" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.353056  377634 pod_ready.go:83] waiting for pod "kube-scheduler-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.752535  377634 pod_ready.go:94] pod "kube-scheduler-addons-007439" is "Ready"
	I1207 22:33:16.752566  377634 pod_ready.go:86] duration metric: took 399.485526ms for pod "kube-scheduler-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.752588  377634 pod_ready.go:40] duration metric: took 1.603635875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:33:16.797666  377634 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 22:33:16.799542  377634 out.go:179] * Done! kubectl is now configured to use "addons-007439" cluster and "default" namespace by default
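	For reference, the gcp-auth opt-out mentioned in the log above works by labeling a pod with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec follows; the pod name, image, and the label value "true" are illustrative assumptions, not taken from this run:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # label key from the message above; value assumed
	spec:
	  containers:
	  - name: app
	    image: nginx                  # placeholder image

	Applied with e.g. `kubectl --context addons-007439 apply -f pod.yaml`, such a pod would be skipped by the gcp-auth credential-mounting webhook.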
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	c4e503e069f86       56cc512116c8f       8 minutes ago       Running             busybox                   0                   7fc15941c9d4c       busybox                                    default
	49d2659e5d005       97fe896f8c07b       9 minutes ago       Running             controller                0                   4c70392eaf980       ingress-nginx-controller-6c8bf45fb-j9c9h   ingress-nginx
	64e04f3b48586       884bd0ac01c8f       9 minutes ago       Exited              patch                     0                   2494dcf80b409       ingress-nginx-admission-patch-ktwnj        ingress-nginx
	a875b450d5fed       884bd0ac01c8f       10 minutes ago      Exited              create                    0                   6d29205806d41       ingress-nginx-admission-create-mx7h5       ingress-nginx
	e4a3f9148c61d       b6ab53fbfedaa       10 minutes ago      Running             minikube-ingress-dns      0                   57857a49054ef       kube-ingress-dns-minikube                  kube-system
	8c0d12e2b6b9f       6e38f40d628db       10 minutes ago      Running             storage-provisioner       0                   1866e052153ea       storage-provisioner                        kube-system
	75238e6b95b61       52546a367cc9e       10 minutes ago      Running             coredns                   0                   daccf049c651e       coredns-66bc5c9577-qv27l                   kube-system
	4022c9fb1481a       409467f978b4a       10 minutes ago      Running             kindnet-cni               0                   8c40702c56ffb       kindnet-8t6g6                              kube-system
	7ec22013ddab0       8aa150647e88a       10 minutes ago      Running             kube-proxy                0                   50b8b1f1714e5       kube-proxy-724nh                           kube-system
	07c7fe1323cf0       a5f569d49a979       10 minutes ago      Running             kube-apiserver            0                   5178bcc811637       kube-apiserver-addons-007439               kube-system
	998061cc2a197       88320b5498ff2       10 minutes ago      Running             kube-scheduler            0                   66accb717e258       kube-scheduler-addons-007439               kube-system
	a0a3f7747e6dd       01e8bacf0f500       10 minutes ago      Running             kube-controller-manager   0                   66a03eb47796d       kube-controller-manager-addons-007439      kube-system
	2d0f96ec7aaa0       a3e246e9556e9       10 minutes ago      Running             etcd                      0                   106ef906bff2c       etcd-addons-007439                         kube-system
	
	
	==> containerd <==
	Dec 07 22:42:23 addons-007439 containerd[659]: time="2025-12-07T22:42:23.065396615Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f2c93c3bef251b22011ee6a4bd35d9.slice/cri-containerd-2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871.scope/hugetlb.1GB.events\""
	Dec 07 22:42:23 addons-007439 containerd[659]: time="2025-12-07T22:42:23.066087581Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca69fa_83a8_43b2_9cc1_ceabbe875006.slice/cri-containerd-49d2659e5d0050668b505655d28976e9b38bc570cc77d174b4f8f6c1444f5537.scope/hugetlb.2MB.events\""
	Dec 07 22:42:23 addons-007439 containerd[659]: time="2025-12-07T22:42:23.066180676Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca69fa_83a8_43b2_9cc1_ceabbe875006.slice/cri-containerd-49d2659e5d0050668b505655d28976e9b38bc570cc77d174b4f8f6c1444f5537.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.079967504Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb977131b_fcf0_4612_af35_8b9e761cbebc.slice/cri-containerd-e4a3f9148c61df7ea7cb2808218f1316d162b66846c55ef9b8af7e33c9cbefab.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.080113571Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb977131b_fcf0_4612_af35_8b9e761cbebc.slice/cri-containerd-e4a3f9148c61df7ea7cb2808218f1316d162b66846c55ef9b8af7e33c9cbefab.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.080891648Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54c5507_48ac_46cd_a418_ad889f0c4081.slice/cri-containerd-75238e6b95b61458a2b61f6a22ba8094e873f25765c0904f4697f54b5321cd61.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.081104552Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54c5507_48ac_46cd_a418_ad889f0c4081.slice/cri-containerd-75238e6b95b61458a2b61f6a22ba8094e873f25765c0904f4697f54b5321cd61.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.082047585Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f2c93c3bef251b22011ee6a4bd35d9.slice/cri-containerd-2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.082156820Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f2c93c3bef251b22011ee6a4bd35d9.slice/cri-containerd-2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.083079512Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca69fa_83a8_43b2_9cc1_ceabbe875006.slice/cri-containerd-49d2659e5d0050668b505655d28976e9b38bc570cc77d174b4f8f6c1444f5537.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.083218244Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca69fa_83a8_43b2_9cc1_ceabbe875006.slice/cri-containerd-49d2659e5d0050668b505655d28976e9b38bc570cc77d174b4f8f6c1444f5537.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.084096808Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdaf9cdc_04a9_4afe_b740_3aad3a1761a7.slice/cri-containerd-c4e503e069f86c607b9bf74f11a48dba95c504950ea6be7818fd7106665a3df3.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.084209198Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdaf9cdc_04a9_4afe_b740_3aad3a1761a7.slice/cri-containerd-c4e503e069f86c607b9bf74f11a48dba95c504950ea6be7818fd7106665a3df3.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.085156138Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d256565b5f8e6d8be70c2986cdb6e86.slice/cri-containerd-07c7fe1323cf055bbbf9b90ad55b35069663427c3e640475467f69b7d9f71828.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.085269483Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d256565b5f8e6d8be70c2986cdb6e86.slice/cri-containerd-07c7fe1323cf055bbbf9b90ad55b35069663427c3e640475467f69b7d9f71828.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.086017800Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2803a223_e9a2_4daa_9b70_2a094a8bfcf9.slice/cri-containerd-4022c9fb1481ae7b0a871bde371a54bebba3d28dfbb30934d428b3720ab39fe2.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.086133690Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2803a223_e9a2_4daa_9b70_2a094a8bfcf9.slice/cri-containerd-4022c9fb1481ae7b0a871bde371a54bebba3d28dfbb30934d428b3720ab39fe2.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.087082439Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e84b285003d418b18a2ad2e58cc0f60.slice/cri-containerd-a0a3f7747e6dde69a78a18a0ec96bc7cfe02e3f2db746ef7631aac0314553e12.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.087190893Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e84b285003d418b18a2ad2e58cc0f60.slice/cri-containerd-a0a3f7747e6dde69a78a18a0ec96bc7cfe02e3f2db746ef7631aac0314553e12.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.087869087Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23475c152526c71f2fc12d9ccd9d08da.slice/cri-containerd-998061cc2a197b456e37469c981d6d7e6eccb94dcca2b9ce4bd676c5e41be358.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.087953058Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23475c152526c71f2fc12d9ccd9d08da.slice/cri-containerd-998061cc2a197b456e37469c981d6d7e6eccb94dcca2b9ce4bd676c5e41be358.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.088796802Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18f83a2_2b43_4c5b_b426_35611913b0fe.slice/cri-containerd-7ec22013ddab02873bd1a7721d350a91368b4489a4fd97bffed120183b9d6f5d.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.088917123Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18f83a2_2b43_4c5b_b426_35611913b0fe.slice/cri-containerd-7ec22013ddab02873bd1a7721d350a91368b4489a4fd97bffed120183b9d6f5d.scope/hugetlb.1GB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.089606356Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62bc9461_6c42_46bb_829b_86b3cbd143b2.slice/cri-containerd-8c0d12e2b6b9f8c7a71e471624fccf0060cb79ad1cf1e51555cfaeb9771ef31b.scope/hugetlb.2MB.events\""
	Dec 07 22:42:33 addons-007439 containerd[659]: time="2025-12-07T22:42:33.089745081Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62bc9461_6c42_46bb_829b_86b3cbd143b2.slice/cri-containerd-8c0d12e2b6b9f8c7a71e471624fccf0060cb79ad1cf1e51555cfaeb9771ef31b.scope/hugetlb.1GB.events\""
	
	
	==> coredns [75238e6b95b61458a2b61f6a22ba8094e873f25765c0904f4697f54b5321cd61] <==
	[INFO] 10.244.0.22:36522 - 2164 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176689s
	[INFO] 10.244.0.22:35062 - 24180 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.0000911s
	[INFO] 10.244.0.22:35062 - 23904 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000125392s
	[INFO] 10.244.0.22:51383 - 52329 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000073453s
	[INFO] 10.244.0.22:51383 - 52564 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000123082s
	[INFO] 10.244.0.22:56036 - 15109 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000066545s
	[INFO] 10.244.0.22:56036 - 14939 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00010264s
	[INFO] 10.244.0.22:53993 - 64779 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009872s
	[INFO] 10.244.0.22:53993 - 65251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144223s
	[INFO] 10.244.0.26:54923 - 59053 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234195s
	[INFO] 10.244.0.26:46003 - 1011 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000329162s
	[INFO] 10.244.0.26:47520 - 46082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155183s
	[INFO] 10.244.0.26:37977 - 4920 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000239923s
	[INFO] 10.244.0.26:40242 - 4515 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134789s
	[INFO] 10.244.0.26:50882 - 20631 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146945s
	[INFO] 10.244.0.26:47109 - 25839 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007579691s
	[INFO] 10.244.0.26:35748 - 58037 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.010934137s
	[INFO] 10.244.0.26:59910 - 10885 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007617469s
	[INFO] 10.244.0.26:52765 - 51850 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007947021s
	[INFO] 10.244.0.26:59171 - 62450 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00512494s
	[INFO] 10.244.0.26:58580 - 6517 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005411444s
	[INFO] 10.244.0.26:56394 - 28475 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000818018s
	[INFO] 10.244.0.26:47777 - 62109 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001117278s
	[INFO] 10.244.0.31:33017 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021271s
	[INFO] 10.244.0.31:60903 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186967s
	
	
	==> describe nodes <==
	Name:               addons-007439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-007439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=addons-007439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_31_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-007439
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:31:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-007439
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:42:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:41:34 +0000   Sun, 07 Dec 2025 22:31:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:41:34 +0000   Sun, 07 Dec 2025 22:31:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:41:34 +0000   Sun, 07 Dec 2025 22:31:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:41:34 +0000   Sun, 07 Dec 2025 22:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-007439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                9cbd7bd8-6368-4564-9823-8547ed273796
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j9c9h    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-qv27l                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-007439                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-8t6g6                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-007439                250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-007439       200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-724nh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-007439                100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-007439 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-007439 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-007439 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m   node-controller  Node addons-007439 event: Registered Node addons-007439 in Controller
	  Normal  NodeReady                10m   kubelet          Node addons-007439 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871] <==
	{"level":"warn","ts":"2025-12-07T22:32:12.205493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.645463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-07T22:32:12.205606Z","caller":"traceutil/trace.go:172","msg":"trace[1470452276] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1067; }","duration":"101.772953ms","start":"2025-12-07T22:32:12.103816Z","end":"2025-12-07T22:32:12.205589Z","steps":["trace[1470452276] 'range keys from in-memory index tree'  (duration: 101.569673ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-07T22:32:23.585959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.613744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.657598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.666262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.676407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.684744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.730367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.848312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.854650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.867091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.876963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.883364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:32:34.615525Z","caller":"traceutil/trace.go:172","msg":"trace[1560222383] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"167.825336ms","start":"2025-12-07T22:32:34.447682Z","end":"2025-12-07T22:32:34.615507Z","steps":["trace[1560222383] 'process raft request'  (duration: 167.7316ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:32:34.785718Z","caller":"traceutil/trace.go:172","msg":"trace[142784423] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"140.107217ms","start":"2025-12-07T22:32:34.645567Z","end":"2025-12-07T22:32:34.785675Z","steps":["trace[142784423] 'process raft request'  (duration: 87.025087ms)","trace[142784423] 'compare'  (duration: 52.9808ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T22:32:52.926412Z","caller":"traceutil/trace.go:172","msg":"trace[1444399122] transaction","detail":"{read_only:false; response_revision:1348; number_of_response:1; }","duration":"116.189643ms","start":"2025-12-07T22:32:52.810208Z","end":"2025-12-07T22:32:52.926397Z","steps":["trace[1444399122] 'process raft request'  (duration: 116.110534ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:33:45.055829Z","caller":"traceutil/trace.go:172","msg":"trace[1004819428] linearizableReadLoop","detail":"{readStateIndex:1575; appliedIndex:1575; }","duration":"102.223648ms","start":"2025-12-07T22:33:44.953581Z","end":"2025-12-07T22:33:45.055805Z","steps":["trace[1004819428] 'read index received'  (duration: 102.21564ms)","trace[1004819428] 'applied index is now lower than readState.Index'  (duration: 6.583µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T22:33:45.119493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.88462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/volcano-system/volcano-controller-configmap\" limit:1 ","response":"range_response_count:1 size:696"}
	{"level":"info","ts":"2025-12-07T22:33:45.119568Z","caller":"traceutil/trace.go:172","msg":"trace[444483577] range","detail":"{range_begin:/registry/configmaps/volcano-system/volcano-controller-configmap; range_end:; response_count:1; response_revision:1534; }","duration":"165.973554ms","start":"2025-12-07T22:33:44.953578Z","end":"2025-12-07T22:33:45.119551Z","steps":["trace[444483577] 'agreement among raft nodes before linearized reading'  (duration: 102.325231ms)","trace[444483577] 'range keys from in-memory index tree'  (duration: 63.472647ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T22:33:45.119626Z","caller":"traceutil/trace.go:172","msg":"trace[510443830] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1535; }","duration":"177.968009ms","start":"2025-12-07T22:33:44.941644Z","end":"2025-12-07T22:33:45.119612Z","steps":["trace[510443830] 'process raft request'  (duration: 114.240627ms)","trace[510443830] 'compare'  (duration: 63.519906ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T22:33:45.119749Z","caller":"traceutil/trace.go:172","msg":"trace[510971700] transaction","detail":"{read_only:false; response_revision:1536; number_of_response:1; }","duration":"138.551277ms","start":"2025-12-07T22:33:44.981181Z","end":"2025-12-07T22:33:45.119732Z","steps":["trace[510971700] 'process raft request'  (duration: 138.362578ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:41:47.943995Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2467}
	{"level":"info","ts":"2025-12-07T22:41:48.196079Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2467,"took":"251.348013ms","hash":3617189837,"current-db-size-bytes":10534912,"current-db-size":"10 MB","current-db-size-in-use-bytes":2957312,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-12-07T22:41:48.196119Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3617189837,"revision":2467,"compact-revision":-1}
	
	
	==> kernel <==
	 22:42:33 up  1:25,  0 user,  load average: 0.18, 0.49, 1.21
	Linux addons-007439 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4022c9fb1481ae7b0a871bde371a54bebba3d28dfbb30934d428b3720ab39fe2] <==
	I1207 22:40:28.431673       1 main.go:301] handling current node
	I1207 22:40:38.425159       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:40:38.425189       1 main.go:301] handling current node
	I1207 22:40:48.425140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:40:48.425186       1 main.go:301] handling current node
	I1207 22:40:58.429818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:40:58.429848       1 main.go:301] handling current node
	I1207 22:41:08.425735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:41:08.425766       1 main.go:301] handling current node
	I1207 22:41:18.424287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:41:18.424320       1 main.go:301] handling current node
	I1207 22:41:28.430392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:41:28.430423       1 main.go:301] handling current node
	I1207 22:41:38.425185       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:41:38.425243       1 main.go:301] handling current node
	I1207 22:41:48.431623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:41:48.431667       1 main.go:301] handling current node
	I1207 22:41:58.424389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:41:58.424423       1 main.go:301] handling current node
	I1207 22:42:08.425729       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:42:08.425760       1 main.go:301] handling current node
	I1207 22:42:18.431161       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:42:18.431205       1 main.go:301] handling current node
	I1207 22:42:28.431414       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:42:28.431448       1 main.go:301] handling current node
	
	
	==> kube-apiserver [07c7fe1323cf055bbbf9b90ad55b35069663427c3e640475467f69b7d9f71828] <==
	W1207 22:33:46.448809       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1207 22:33:46.666103       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1207 22:33:46.754795       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1207 22:34:03.231581       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41204: use of closed network connection
	E1207 22:34:03.391462       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41230: use of closed network connection
	I1207 22:34:12.984087       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.83.62"}
	I1207 22:34:31.644379       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1207 22:34:31.806141       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.56.197"}
	I1207 22:34:34.111205       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1207 22:35:01.906207       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.906259       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.919214       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.919255       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.922666       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.922723       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.941614       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.941659       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.945894       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.945933       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1207 22:35:02.850032       1 watch.go:272] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	W1207 22:35:02.920200       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1207 22:35:02.946570       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1207 22:35:03.064422       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1207 22:35:20.432209       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1207 22:41:48.826157       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a0a3f7747e6dde69a78a18a0ec96bc7cfe02e3f2db746ef7631aac0314553e12] <==
	E1207 22:41:55.019551       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:41:55.020646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:41:55.804935       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1207 22:41:56.768903       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:41:56.769870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:02.672534       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:02.673720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:05.683253       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:05.684208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:05.940297       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:05.941374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:06.270869       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:06.271746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:09.283761       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:09.284670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:10.805586       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1207 22:42:11.438248       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:11.439375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:17.722181       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:17.723151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:25.806409       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1207 22:42:28.242427       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:28.243333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:42:31.754823       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:42:31.755759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7ec22013ddab02873bd1a7721d350a91368b4489a4fd97bffed120183b9d6f5d] <==
	I1207 22:31:57.619006       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:31:57.679727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:31:57.781825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:31:57.781884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:31:57.781994       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:31:57.830496       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:31:57.830682       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:31:57.837209       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:31:57.838056       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:31:57.838194       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:31:57.840072       1 config.go:200] "Starting service config controller"
	I1207 22:31:57.841722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:31:57.840615       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:31:57.842114       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:31:57.841220       1 config.go:309] "Starting node config controller"
	I1207 22:31:57.842284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:31:57.846848       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:31:57.840603       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:31:57.847676       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:31:57.942507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:31:57.942563       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:31:57.948332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [998061cc2a197b456e37469c981d6d7e6eccb94dcca2b9ce4bd676c5e41be358] <==
	E1207 22:31:48.839854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:31:48.839940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:31:48.840009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:31:48.840037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:31:48.840046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:31:48.840117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:31:48.840427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:31:48.840758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:31:48.841266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:31:48.841325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:31:48.841335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:31:48.841329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:31:48.841366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:31:48.841372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:31:48.841397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:31:48.841541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:31:48.841541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:31:49.663764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:31:49.687748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:31:49.687906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:31:49.725222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:31:49.794614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:31:49.876298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:31:49.909380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1207 22:31:50.338448       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 22:40:19 addons-007439 kubelet[1406]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	Dec 07 22:40:19 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:40:19 addons-007439 kubelet[1406]:  > image="docker.io/nginx:alpine"
	Dec 07 22:40:19 addons-007439 kubelet[1406]: E1207 22:40:19.147162    1406 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 22:40:19 addons-007439 kubelet[1406]:         failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	Dec 07 22:40:19 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:40:19 addons-007439 kubelet[1406]:  > image="docker.io/nginx:alpine"
	Dec 07 22:40:19 addons-007439 kubelet[1406]: E1207 22:40:19.147252    1406 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 22:40:19 addons-007439 kubelet[1406]:         container nginx start failed in pod nginx_default(3c9264fd-2643-4039-b4ae-33eb195a30d1): ErrImagePull: failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	Dec 07 22:40:19 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:40:19 addons-007439 kubelet[1406]:  > logger="UnhandledError"
	Dec 07 22:40:19 addons-007439 kubelet[1406]: E1207 22:40:19.147280    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:40:31 addons-007439 kubelet[1406]: E1207 22:40:31.220007    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:40:46 addons-007439 kubelet[1406]: E1207 22:40:46.219845    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:41:01 addons-007439 kubelet[1406]: E1207 22:41:01.220210    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:41:09 addons-007439 kubelet[1406]: I1207 22:41:09.219513    1406 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:41:13 addons-007439 kubelet[1406]: E1207 22:41:13.219723    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:41:25 addons-007439 kubelet[1406]: I1207 22:41:25.219061    1406 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-qv27l" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:41:28 addons-007439 kubelet[1406]: E1207 22:41:28.219816    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:41:41 addons-007439 kubelet[1406]: E1207 22:41:41.220207    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:41:52 addons-007439 kubelet[1406]: E1207 22:41:52.219836    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:42:03 addons-007439 kubelet[1406]: E1207 22:42:03.219354    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:42:16 addons-007439 kubelet[1406]: E1207 22:42:16.220152    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:42:31 addons-007439 kubelet[1406]: I1207 22:42:31.219774    1406 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:42:31 addons-007439 kubelet[1406]: E1207 22:42:31.220600    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	
	
	==> storage-provisioner [8c0d12e2b6b9f8c7a71e471624fccf0060cb79ad1cf1e51555cfaeb9771ef31b] <==
	W1207 22:42:07.583609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:09.586508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:09.591757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:11.594919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:11.598513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:13.601425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:13.605106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:15.607960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:15.612887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:17.615478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:17.619094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:19.621960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:19.625522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:21.628223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:21.631853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:23.634597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:23.639179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:25.642071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:25.645723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:27.648411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:27.653062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:29.656159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:29.660079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:31.663130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:31.666929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
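The storage-provisioner output above is only a stream of deprecation warnings: the provisioner still lists and watches v1 Endpoints every couple of seconds, and the API server notes that discovery.k8s.io/v1 EndpointSlice is the replacement. These warnings are unrelated to the nginx pull failure. Purely for comparison, a minimal client-go sketch (not minikube or storage-provisioner code; it assumes client-go is available and a kubeconfig exists at the default location) that reads EndpointSlices instead of Endpoints could look like this:

// endpointslices_list.go - illustrative only; lists EndpointSlices, the non-deprecated
// replacement for the v1 Endpoints reads flagged in the warnings above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List EndpointSlices in kube-system instead of v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints), "endpoints")
	}
}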
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-007439 -n addons-007439
helpers_test.go:269: (dbg) Run:  kubectl --context addons-007439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-007439 describe pod nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-007439 describe pod nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj: exit status 1 (70.333573ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-007439/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:34:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qg2pq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qg2pq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  8m3s   default-scheduler  Successfully assigned default/nginx to addons-007439
	  Warning  Failed     7m44s  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  5m (x5 over 8m2s)     kubelet  Pulling image "docker.io/nginx:alpine"
	  Warning  Failed   4m59s (x4 over 8m1s)  kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   4m59s (x5 over 8m1s)  kubelet  Error: ErrImagePull
	  Normal   BackOff  2m56s (x21 over 8m)   kubelet  Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed   2m56s (x21 over 8m)   kubelet  Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c72mg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-c72mg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mx7h5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ktwnj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-007439 describe pod nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable ingress-dns --alsologtostderr -v=1: (1.152775929s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable ingress --alsologtostderr -v=1: (7.630453002s)
--- FAIL: TestAddons/parallel/Ingress (491.47s)
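The FAIL above is driven entirely by Docker Hub rate limiting: every pull of docker.io/nginx:alpine was rejected with 429 Too Many Requests, so the nginx pod never left ImagePullBackOff and the ingress check timed out. When triaging this class of failure it can help to read the registry's rate-limit headers directly. The following is a minimal, self-contained sketch (not part of the test suite; it assumes outbound HTTPS access to auth.docker.io and registry-1.docker.io and uses Docker's documented ratelimitpreview/test repository):

// ratelimit_check.go - probe the anonymous Docker Hub pull allowance that caused
// the ImagePullBackOff events above. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token for the ratelimitpreview/test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; the response headers carry the current limits.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	// A 429 here mirrors the kubelet errors above; otherwise the headers show how
	// much of the unauthenticated allowance remains.
	fmt.Println("status:             ", res.Status)
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

A 429 status or a ratelimit-remaining of 0 from this probe points at the unauthenticated pull allowance rather than anything cluster-side; authenticating pulls, using a registry mirror, or pre-loading the image (for example with minikube image load nginx:alpine) are the usual ways to keep this test from flaking.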

                                                
                                    
x
+
TestAddons/parallel/LocalPath (344.79s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-007439 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-007439 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the same helpers_test.go:402 poll repeated roughly 300 more times during the 5m0s wait ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-007439 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (3.31µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
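The LocalPath failure is a plain timeout: the helper at helpers_test.go:402 repeatedly reads .status.phase of the test-pvc PersistentVolumeClaim and gives up once the 5m0s budget is spent, and the claim never reported Bound (the test-local-path consumer pod described earlier was never scheduled, so with the local-path provisioner's usual WaitForFirstConsumer binding the claim stays Pending). A rough client-go equivalent of that polling loop, shown only to make the wait explicit (assumptions: client-go is available and a kubeconfig exists at the default location; this is not the minikube helper itself), is:

// pvc_wait.go - poll a PVC's phase until Bound or the deadline expires, mirroring
// the repeated "get pvc test-pvc -o jsonpath={.status.phase}" calls above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Poll every 2s until the claim is Bound or the 5m context deadline is hit.
	err = wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "test-pvc", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors, like the kubectl helper does
		}
		fmt.Println("phase:", pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		fmt.Println("failed waiting for PVC test-pvc:", err)
	}
}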
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-007439
helpers_test.go:243: (dbg) docker inspect addons-007439:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7",
	        "Created": "2025-12-07T22:31:36.240822065Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378269,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:31:36.269867677Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/hosts",
	        "LogPath": "/var/lib/docker/containers/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7/95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7-json.log",
	        "Name": "/addons-007439",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-007439:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-007439",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "95784a3a52e61063335d389754aa08b4d87bfedc08bf523428bfa3152dd5d7d7",
	                "LowerDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80e9325d913045df6acf50f0e0f595785f44bf5cc35a62845dad774c1cd4c36c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-007439",
	                "Source": "/var/lib/docker/volumes/addons-007439/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-007439",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-007439",
	                "name.minikube.sigs.k8s.io": "addons-007439",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f62c5320cdbde03cb54a53c02f096730436c8a76e0b78c5c13fc9cb071c192ec",
	            "SandboxKey": "/var/run/docker/netns/f62c5320cdbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-007439": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04fdabd91e38450778e4670e1c706e19a00f2f24341a0a6ec9a0300899913688",
	                    "EndpointID": "dd19f9f553ca01c1448abb8f35ca5d5acbe2ce55f612d24faadd69f4b35a5319",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6a:9e:1f:f0:fc:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-007439",
	                        "95784a3a52e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
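The inspect output above shows each published container port bound to an ephemeral loopback port (22/tcp on 127.0.0.1:33145, 8443/tcp on 33148, and so on). As a quick sanity check, the same mapping can be read back by hand with the Go template the provisioning log uses further down, or with `docker port`; this assumes the addons-007439 container is still running:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-007439
    # 33145
    docker port addons-007439 22/tcp
    # 127.0.0.1:33145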
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-007439 -n addons-007439
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 logs -n 25
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-899458                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-899458   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ delete  │ -p download-only-009383                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-009383   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ --download-only -p download-docker-804251 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-804251 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ delete  │ -p download-docker-804251                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-804251 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ --download-only -p binary-mirror-557917 --alsologtostderr --binary-mirror http://127.0.0.1:44153 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-557917   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ delete  │ -p binary-mirror-557917                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-557917   │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ addons  │ enable dashboard -p addons-007439                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ addons  │ disable dashboard -p addons-007439                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ start   │ -p addons-007439 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:33 UTC │
	│ addons  │ addons-007439 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:33 UTC │ 07 Dec 25 22:33 UTC │
	│ addons  │ addons-007439 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ enable headlamp -p addons-007439 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ ip      │ addons-007439 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-007439                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons  │ addons-007439 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:35 UTC │
	│ addons  │ addons-007439 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:35 UTC │ 07 Dec 25 22:35 UTC │
	│ addons  │ addons-007439 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-007439          │ jenkins │ v1.37.0 │ 07 Dec 25 22:35 UTC │ 07 Dec 25 22:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:31:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:31:14.046059  377634 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:31:14.046160  377634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:14.046171  377634 out.go:374] Setting ErrFile to fd 2...
	I1207 22:31:14.046177  377634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:14.046369  377634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:31:14.046902  377634 out.go:368] Setting JSON to false
	I1207 22:31:14.047862  377634 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4423,"bootTime":1765142251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:31:14.047960  377634 start.go:143] virtualization: kvm guest
	I1207 22:31:14.049737  377634 out.go:179] * [addons-007439] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:31:14.050928  377634 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:31:14.050931  377634 notify.go:221] Checking for updates...
	I1207 22:31:14.052304  377634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:31:14.053561  377634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:31:14.054761  377634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:31:14.055729  377634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:31:14.056780  377634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:31:14.057907  377634 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:31:14.081483  377634 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:31:14.081641  377634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:14.132808  377634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:31:14.123546778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
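The driver preflight above dumps the full `docker system info` JSON (and repeats it below during driver validation). When reading such reports, a trimmed view of the fields minikube cares about is usually enough; a sketch, assuming jq is installed and using the key names visible in the dump:

    docker system info --format "{{json .}}" | jq '{ServerVersion, CgroupDriver, OSType, NCPU, MemTotal}'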
	I1207 22:31:14.132917  377634 docker.go:319] overlay module found
	I1207 22:31:14.134771  377634 out.go:179] * Using the docker driver based on user configuration
	I1207 22:31:14.135980  377634 start.go:309] selected driver: docker
	I1207 22:31:14.135998  377634 start.go:927] validating driver "docker" against <nil>
	I1207 22:31:14.136009  377634 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:31:14.136562  377634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:14.189081  377634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:31:14.180290541 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:14.189236  377634 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:31:14.189474  377634 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:31:14.190977  377634 out.go:179] * Using Docker driver with root privileges
	I1207 22:31:14.191977  377634 cni.go:84] Creating CNI manager for ""
	I1207 22:31:14.192043  377634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:31:14.192055  377634 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:31:14.192127  377634 start.go:353] cluster config:
	{Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:31:14.193221  377634 out.go:179] * Starting "addons-007439" primary control-plane node in "addons-007439" cluster
	I1207 22:31:14.194141  377634 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1207 22:31:14.195245  377634 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:31:14.196309  377634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:31:14.196342  377634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1207 22:31:14.196338  377634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:31:14.196350  377634 cache.go:65] Caching tarball of preloaded images
	I1207 22:31:14.196478  377634 preload.go:238] Found /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1207 22:31:14.196494  377634 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1207 22:31:14.196887  377634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/config.json ...
	I1207 22:31:14.196918  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/config.json: {Name:mkd711f4a39ae29c1469946698f365f25da8671d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:14.211480  377634 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:31:14.211584  377634 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:31:14.211602  377634 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:31:14.211609  377634 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:31:14.211619  377634 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:31:14.211628  377634 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1207 22:31:26.188448  377634 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1207 22:31:26.188490  377634 cache.go:243] Successfully downloaded all kic artifacts
	I1207 22:31:26.188559  377634 start.go:360] acquireMachinesLock for addons-007439: {Name:mk0a328d31f3ea0280e3d48742116a2ba7bfc2ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 22:31:26.188671  377634 start.go:364] duration metric: took 86.787µs to acquireMachinesLock for "addons-007439"
	I1207 22:31:26.188724  377634 start.go:93] Provisioning new machine with config: &{Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 22:31:26.188820  377634 start.go:125] createHost starting for "" (driver="docker")
	I1207 22:31:26.190516  377634 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1207 22:31:26.190774  377634 start.go:159] libmachine.API.Create for "addons-007439" (driver="docker")
	I1207 22:31:26.190816  377634 client.go:173] LocalClient.Create starting
	I1207 22:31:26.190920  377634 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem
	I1207 22:31:26.218644  377634 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem
	I1207 22:31:26.283656  377634 cli_runner.go:164] Run: docker network inspect addons-007439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 22:31:26.299907  377634 cli_runner.go:211] docker network inspect addons-007439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 22:31:26.299991  377634 network_create.go:284] running [docker network inspect addons-007439] to gather additional debugging logs...
	I1207 22:31:26.300010  377634 cli_runner.go:164] Run: docker network inspect addons-007439
	W1207 22:31:26.315119  377634 cli_runner.go:211] docker network inspect addons-007439 returned with exit code 1
	I1207 22:31:26.315144  377634 network_create.go:287] error running [docker network inspect addons-007439]: docker network inspect addons-007439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-007439 not found
	I1207 22:31:26.315159  377634 network_create.go:289] output of [docker network inspect addons-007439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-007439 not found
	
	** /stderr **
	I1207 22:31:26.315270  377634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:31:26.332214  377634 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014c1df0}
	I1207 22:31:26.332254  377634 network_create.go:124] attempt to create docker network addons-007439 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 22:31:26.332295  377634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-007439 addons-007439
	I1207 22:31:26.375270  377634 network_create.go:108] docker network addons-007439 192.168.49.0/24 created
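network_create picks the first free private /24 (192.168.49.0/24 here) and creates a labelled bridge network with the `docker network create` invocation shown above. The result can be confirmed with the same template fragment the earlier inspect calls use; this assumes the network still exists:

    docker network inspect addons-007439 --format '{{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}'
    # 192.168.49.0/24 (gateway 192.168.49.1)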
	I1207 22:31:26.375297  377634 kic.go:121] calculated static IP "192.168.49.2" for the "addons-007439" container
	I1207 22:31:26.375373  377634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 22:31:26.390820  377634 cli_runner.go:164] Run: docker volume create addons-007439 --label name.minikube.sigs.k8s.io=addons-007439 --label created_by.minikube.sigs.k8s.io=true
	I1207 22:31:26.407217  377634 oci.go:103] Successfully created a docker volume addons-007439
	I1207 22:31:26.407302  377634 cli_runner.go:164] Run: docker run --rm --name addons-007439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-007439 --entrypoint /usr/bin/test -v addons-007439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 22:31:32.482129  377634 cli_runner.go:217] Completed: docker run --rm --name addons-007439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-007439 --entrypoint /usr/bin/test -v addons-007439:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (6.074770286s)
	I1207 22:31:32.482166  377634 oci.go:107] Successfully prepared a docker volume addons-007439
	I1207 22:31:32.482240  377634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:31:32.482269  377634 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 22:31:32.482342  377634 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-007439:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 22:31:36.173057  377634 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-007439:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.690665069s)
	I1207 22:31:36.173093  377634 kic.go:203] duration metric: took 3.690821337s to extract preloaded images to volume ...
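The preload step mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the addons-007439 volume, which is later mounted at /var in the node (see the --volume addons-007439:/var flag below). A rough way to see what landed in the volume, assuming the tarball unpacks a lib/ tree and using alpine only as a convenient scratch image:

    docker run --rm -v addons-007439:/var alpine ls /var/lib
    # expect a containerd/ directory holding the preloaded images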
	W1207 22:31:36.173207  377634 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 22:31:36.173252  377634 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
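The two warnings above mean the memory and CPU limits minikube would like to apply to the node container may not be fully enforced by the kernel's cgroup controllers. One quick way to see what the Docker daemon itself reports for those controllers (field names as they appear in the info dump earlier):

    docker info --format 'SwapLimit={{.SwapLimit}} CPUCfsPeriod={{.CPUCfsPeriod}} CPUCfsQuota={{.CPUCfsQuota}}'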
	I1207 22:31:36.173301  377634 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 22:31:36.225955  377634 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-007439 --name addons-007439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-007439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-007439 --network addons-007439 --ip 192.168.49.2 --volume addons-007439:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 22:31:36.469349  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Running}}
	I1207 22:31:36.488298  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:36.506043  377634 cli_runner.go:164] Run: docker exec addons-007439 stat /var/lib/dpkg/alternatives/iptables
	I1207 22:31:36.548077  377634 oci.go:144] the created container "addons-007439" has a running status.
	I1207 22:31:36.548114  377634 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa...
	I1207 22:31:36.628425  377634 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 22:31:36.651401  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:36.667493  377634 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 22:31:36.667523  377634 kic_runner.go:114] Args: [docker exec --privileged addons-007439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 22:31:36.712398  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:36.734452  377634 machine.go:94] provisionDockerMachine start ...
	I1207 22:31:36.734548  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:36.758138  377634 main.go:143] libmachine: Using SSH client type: native
	I1207 22:31:36.758553  377634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I1207 22:31:36.758611  377634 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 22:31:36.759278  377634 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56188->127.0.0.1:33145: read: connection reset by peer
	I1207 22:31:39.883161  377634 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-007439
	
	I1207 22:31:39.883189  377634 ubuntu.go:182] provisioning hostname "addons-007439"
	I1207 22:31:39.883260  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:39.900647  377634 main.go:143] libmachine: Using SSH client type: native
	I1207 22:31:39.900894  377634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I1207 22:31:39.900908  377634 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-007439 && echo "addons-007439" | sudo tee /etc/hostname
	I1207 22:31:40.033247  377634 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-007439
	
	I1207 22:31:40.033328  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.051011  377634 main.go:143] libmachine: Using SSH client type: native
	I1207 22:31:40.051212  377634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I1207 22:31:40.051227  377634 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-007439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-007439/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-007439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 22:31:40.174809  377634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 22:31:40.174841  377634 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-372312/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-372312/.minikube}
	I1207 22:31:40.174876  377634 ubuntu.go:190] setting up certificates
	I1207 22:31:40.174887  377634 provision.go:84] configureAuth start
	I1207 22:31:40.174944  377634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-007439
	I1207 22:31:40.192667  377634 provision.go:143] copyHostCerts
	I1207 22:31:40.192762  377634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem (1123 bytes)
	I1207 22:31:40.192908  377634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem (1679 bytes)
	I1207 22:31:40.193008  377634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem (1082 bytes)
	I1207 22:31:40.193088  377634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem org=jenkins.addons-007439 san=[127.0.0.1 192.168.49.2 addons-007439 localhost minikube]
	I1207 22:31:40.290023  377634 provision.go:177] copyRemoteCerts
	I1207 22:31:40.290075  377634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 22:31:40.290107  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.306794  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.398002  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 22:31:40.415792  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 22:31:40.431954  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 22:31:40.448004  377634 provision.go:87] duration metric: took 273.102278ms to configureAuth
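configureAuth generates a server certificate whose SANs cover the node's addresses (127.0.0.1, 192.168.49.2, addons-007439, localhost, minikube) and copies it to /etc/docker/ inside the node. If a TLS problem is suspected, the SANs can be inspected in place; this assumes openssl is present in the kicbase image:

    docker exec addons-007439 sh -c "openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"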
	I1207 22:31:40.448026  377634 ubuntu.go:206] setting minikube options for container-runtime
	I1207 22:31:40.448176  377634 config.go:182] Loaded profile config "addons-007439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:31:40.448189  377634 machine.go:97] duration metric: took 3.713716331s to provisionDockerMachine
	I1207 22:31:40.448196  377634 client.go:176] duration metric: took 14.257372279s to LocalClient.Create
	I1207 22:31:40.448213  377634 start.go:167] duration metric: took 14.257441921s to libmachine.API.Create "addons-007439"
	I1207 22:31:40.448223  377634 start.go:293] postStartSetup for "addons-007439" (driver="docker")
	I1207 22:31:40.448231  377634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 22:31:40.448274  377634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 22:31:40.448304  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.465194  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.558676  377634 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 22:31:40.561924  377634 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 22:31:40.561960  377634 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 22:31:40.561996  377634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/addons for local assets ...
	I1207 22:31:40.562060  377634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/files for local assets ...
	I1207 22:31:40.562088  377634 start.go:296] duration metric: took 113.859744ms for postStartSetup
	I1207 22:31:40.562415  377634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-007439
	I1207 22:31:40.579687  377634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/config.json ...
	I1207 22:31:40.579935  377634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 22:31:40.579973  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.596102  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.684193  377634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 22:31:40.688530  377634 start.go:128] duration metric: took 14.499693329s to createHost
	I1207 22:31:40.688557  377634 start.go:83] releasing machines lock for "addons-007439", held for 14.499867519s
	I1207 22:31:40.688633  377634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-007439
	I1207 22:31:40.705296  377634 ssh_runner.go:195] Run: cat /version.json
	I1207 22:31:40.705339  377634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 22:31:40.705345  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.705396  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:40.722684  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.723321  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:40.810214  377634 ssh_runner.go:195] Run: systemctl --version
	I1207 22:31:40.862529  377634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 22:31:40.866876  377634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 22:31:40.866936  377634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 22:31:40.890371  377634 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 22:31:40.890391  377634 start.go:496] detecting cgroup driver to use...
	I1207 22:31:40.890418  377634 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:31:40.890461  377634 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 22:31:40.903101  377634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 22:31:40.914203  377634 docker.go:218] disabling cri-docker service (if available) ...
	I1207 22:31:40.914242  377634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 22:31:40.928602  377634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 22:31:40.943847  377634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 22:31:41.018127  377634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 22:31:41.098293  377634 docker.go:234] disabling docker service ...
	I1207 22:31:41.098361  377634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 22:31:41.115634  377634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 22:31:41.126792  377634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 22:31:41.206925  377634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 22:31:41.285112  377634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 22:31:41.296219  377634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:31:41.309172  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1207 22:31:41.318956  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 22:31:41.327167  377634 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1207 22:31:41.327210  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1207 22:31:41.335229  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:31:41.343139  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 22:31:41.350866  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:31:41.358744  377634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 22:31:41.365972  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 22:31:41.373717  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1207 22:31:41.381538  377634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1207 22:31:41.389762  377634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 22:31:41.396424  377634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 22:31:41.403082  377634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:31:41.476754  377634 ssh_runner.go:195] Run: sudo systemctl restart containerd
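The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true), pin the sandbox (pause) image, and point conf_dir at /etc/cni/net.d before containerd is restarted. A quick way to confirm the rewritten settings (sketch, same profile assumption as above):

    minikube -p addons-007439 ssh -- sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml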
	I1207 22:31:41.570761  377634 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1207 22:31:41.570835  377634 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1207 22:31:41.574635  377634 start.go:564] Will wait 60s for crictl version
	I1207 22:31:41.574678  377634 ssh_runner.go:195] Run: which crictl
	I1207 22:31:41.578062  377634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 22:31:41.600434  377634 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1207 22:31:41.600502  377634 ssh_runner.go:195] Run: containerd --version
	I1207 22:31:41.620447  377634 ssh_runner.go:195] Run: containerd --version
	I1207 22:31:41.642253  377634 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1207 22:31:41.643409  377634 cli_runner.go:164] Run: docker network inspect addons-007439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:31:41.659972  377634 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 22:31:41.663594  377634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:31:41.673250  377634 kubeadm.go:884] updating cluster {Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 22:31:41.673388  377634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:31:41.673440  377634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:31:41.696938  377634 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 22:31:41.696956  377634 containerd.go:534] Images already preloaded, skipping extraction
	I1207 22:31:41.696995  377634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:31:41.718868  377634 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 22:31:41.718889  377634 cache_images.go:86] Images are preloaded, skipping loading
	I1207 22:31:41.718898  377634 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 containerd true true} ...
	I1207 22:31:41.718994  377634 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-007439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
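The kubelet unit override above is written as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below), so the effective unit is the base kubelet.service plus this ExecStart. To see the merged unit inside the node (sketch, same profile assumption):

    minikube -p addons-007439 ssh -- sudo systemctl cat kubelet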
	I1207 22:31:41.719040  377634 ssh_runner.go:195] Run: sudo crictl info
	I1207 22:31:41.743660  377634 cni.go:84] Creating CNI manager for ""
	I1207 22:31:41.743690  377634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:31:41.743742  377634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 22:31:41.743777  377634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-007439 NodeName:addons-007439 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 22:31:41.743934  377634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-007439"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 22:31:41.744006  377634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 22:31:41.751163  377634 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 22:31:41.751210  377634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 22:31:41.758100  377634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1207 22:31:41.769339  377634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 22:31:41.782734  377634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
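The kubeadm.yaml.new just staged contains the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration blocks printed above. If it ever needs to be inspected or re-checked by hand, a sketch (assumes the kubeadm binary path used later in this log; the kubeadm config validate subcommand is available in recent kubeadm releases):

    minikube -p addons-007439 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p addons-007439 ssh -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new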
	I1207 22:31:41.793800  377634 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 22:31:41.796975  377634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
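Both /etc/hosts updates in this log use the same pattern: drop any stale line for the host name, then append the fresh IP-to-name mapping via a temp file. A generic sketch of that idiom (the NAME and IP variables are introduced for illustration; the values match the entry added above):

    NAME=control-plane.minikube.internal; IP=192.168.49.2
    # keep every line except an old entry for NAME, then append the new mapping
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts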
	I1207 22:31:41.805955  377634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:31:41.883376  377634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:31:41.904773  377634 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439 for IP: 192.168.49.2
	I1207 22:31:41.904793  377634 certs.go:195] generating shared ca certs ...
	I1207 22:31:41.904812  377634 certs.go:227] acquiring lock for ca certs: {Name:mkfbf8cc1b70595007aee06919ab1fb852308b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:41.904947  377634 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key
	I1207 22:31:42.130091  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt ...
	I1207 22:31:42.130130  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt: {Name:mk8dfbdfb75c8c0479e251862d216908e2b2b3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.130321  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key ...
	I1207 22:31:42.130344  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key: {Name:mk6a57f170ddef037fd2cfec70ffb4115970df50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.130455  377634 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key
	I1207 22:31:42.166135  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt ...
	I1207 22:31:42.166156  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt: {Name:mk1808bea8f985a935bad71a78e2b0c077dd6970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.166273  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key ...
	I1207 22:31:42.166283  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key: {Name:mkdba25eb46af80fa654f22940694d57b073ce1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.166351  377634 certs.go:257] generating profile certs ...
	I1207 22:31:42.166408  377634 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.key
	I1207 22:31:42.166421  377634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt with IP's: []
	I1207 22:31:42.258862  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt ...
	I1207 22:31:42.258884  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: {Name:mk47aa93b7952dc03abe35072814a68cf0a9a6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.259012  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.key ...
	I1207 22:31:42.259023  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.key: {Name:mk0dbf413ad77f8502a2b9c12fce4a6e1a599682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.259112  377634 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546
	I1207 22:31:42.259133  377634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1207 22:31:42.339199  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546 ...
	I1207 22:31:42.339218  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546: {Name:mkce0b61820000a6ea0be206a089dfae591a3b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.339330  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546 ...
	I1207 22:31:42.339342  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546: {Name:mkac293cbdbd7d1e7dbb5dd59a463a45c1c2fc0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.339408  377634 certs.go:382] copying /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt.be306546 -> /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt
	I1207 22:31:42.339491  377634 certs.go:386] copying /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key.be306546 -> /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key
	I1207 22:31:42.339546  377634 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key
	I1207 22:31:42.339563  377634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt with IP's: []
	I1207 22:31:42.470469  377634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt ...
	I1207 22:31:42.470488  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt: {Name:mk1ccdb83dc0949c0c9aa38955a9de59a9a129d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.470603  377634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key ...
	I1207 22:31:42.470614  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key: {Name:mk2faec993162ad0b712cd77d17256d23a1bcabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:42.470781  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 22:31:42.470820  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem (1082 bytes)
	I1207 22:31:42.470851  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem (1123 bytes)
	I1207 22:31:42.470878  377634 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem (1679 bytes)
	I1207 22:31:42.471419  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 22:31:42.489246  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 22:31:42.505842  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 22:31:42.522237  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1207 22:31:42.538105  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1207 22:31:42.553666  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 22:31:42.569294  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 22:31:42.585069  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 22:31:42.600618  377634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 22:31:42.619113  377634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 22:31:42.630420  377634 ssh_runner.go:195] Run: openssl version
	I1207 22:31:42.636124  377634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.642720  377634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 22:31:42.651336  377634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.654526  377634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.654567  377634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:31:42.687495  377634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 22:31:42.694211  377634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
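At this point the profile's certificates have been generated and copied into /var/lib/minikube/certs, and the minikube CA is registered in the node's trust store via the OpenSSL subject-hash symlink (b5213941.0). To double-check the API server certificate SANs against the IP list used when it was generated above, a sketch (same profile and cert paths as in this log):

    minikube -p addons-007439 ssh -- sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'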
	I1207 22:31:42.700799  377634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 22:31:42.703946  377634 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 22:31:42.703991  377634 kubeadm.go:401] StartCluster: {Name:addons-007439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-007439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:31:42.704084  377634 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1207 22:31:42.704137  377634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:31:42.729420  377634 cri.go:89] found id: ""
	I1207 22:31:42.729471  377634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 22:31:42.736488  377634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 22:31:42.743600  377634 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 22:31:42.743642  377634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 22:31:42.750528  377634 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 22:31:42.750543  377634 kubeadm.go:158] found existing configuration files:
	
	I1207 22:31:42.750579  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 22:31:42.757285  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 22:31:42.757332  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 22:31:42.763931  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 22:31:42.770781  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 22:31:42.770832  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 22:31:42.777578  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 22:31:42.784446  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 22:31:42.784501  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 22:31:42.791268  377634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 22:31:42.798206  377634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 22:31:42.798247  377634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 22:31:42.805865  377634 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 22:31:42.844360  377634 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 22:31:42.844448  377634 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 22:31:42.862864  377634 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 22:31:42.862989  377634 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 22:31:42.863049  377634 kubeadm.go:319] OS: Linux
	I1207 22:31:42.863120  377634 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 22:31:42.863181  377634 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 22:31:42.863254  377634 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 22:31:42.863332  377634 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 22:31:42.863405  377634 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 22:31:42.863481  377634 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 22:31:42.863558  377634 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 22:31:42.863597  377634 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 22:31:42.917066  377634 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 22:31:42.917193  377634 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 22:31:42.917304  377634 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 22:31:42.921884  377634 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 22:31:42.923828  377634 out.go:252]   - Generating certificates and keys ...
	I1207 22:31:42.923918  377634 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 22:31:42.924002  377634 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 22:31:43.049771  377634 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 22:31:43.369053  377634 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 22:31:43.427339  377634 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 22:31:43.502587  377634 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 22:31:43.635435  377634 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 22:31:43.635574  377634 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-007439 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:31:43.909608  377634 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 22:31:43.909794  377634 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-007439 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:31:44.142915  377634 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 22:31:44.161353  377634 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 22:31:44.322435  377634 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 22:31:44.322582  377634 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 22:31:44.400575  377634 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 22:31:44.959931  377634 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 22:31:45.280678  377634 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 22:31:45.585157  377634 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 22:31:45.953829  377634 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 22:31:45.954241  377634 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 22:31:45.958871  377634 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 22:31:45.960210  377634 out.go:252]   - Booting up control plane ...
	I1207 22:31:45.960298  377634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 22:31:45.960365  377634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 22:31:45.960975  377634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 22:31:45.974662  377634 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 22:31:45.974823  377634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 22:31:45.980677  377634 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 22:31:45.980971  377634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 22:31:45.981021  377634 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 22:31:46.076366  377634 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 22:31:46.076517  377634 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 22:31:47.078047  377634 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00177457s
	I1207 22:31:47.081499  377634 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 22:31:47.081631  377634 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1207 22:31:47.081751  377634 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 22:31:47.081831  377634 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 22:31:48.357388  377634 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.275763293s
	I1207 22:31:48.843156  377634 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.761626171s
	I1207 22:31:50.582956  377634 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501331108s
	I1207 22:31:50.600935  377634 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 22:31:50.609203  377634 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 22:31:50.617404  377634 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 22:31:50.617634  377634 kubeadm.go:319] [mark-control-plane] Marking the node addons-007439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 22:31:50.625104  377634 kubeadm.go:319] [bootstrap-token] Using token: izaz1b.uyzwo916dp9ka22u
	I1207 22:31:50.626539  377634 out.go:252]   - Configuring RBAC rules ...
	I1207 22:31:50.626688  377634 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 22:31:50.629435  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 22:31:50.633988  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 22:31:50.636884  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 22:31:50.638869  377634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 22:31:50.640996  377634 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 22:31:50.988581  377634 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 22:31:51.400758  377634 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 22:31:51.988831  377634 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 22:31:51.989575  377634 kubeadm.go:319] 
	I1207 22:31:51.989637  377634 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 22:31:51.989647  377634 kubeadm.go:319] 
	I1207 22:31:51.989791  377634 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 22:31:51.989817  377634 kubeadm.go:319] 
	I1207 22:31:51.989858  377634 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 22:31:51.989965  377634 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 22:31:51.990047  377634 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 22:31:51.990057  377634 kubeadm.go:319] 
	I1207 22:31:51.990129  377634 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 22:31:51.990138  377634 kubeadm.go:319] 
	I1207 22:31:51.990207  377634 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 22:31:51.990216  377634 kubeadm.go:319] 
	I1207 22:31:51.990296  377634 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 22:31:51.990407  377634 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 22:31:51.990490  377634 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 22:31:51.990507  377634 kubeadm.go:319] 
	I1207 22:31:51.990644  377634 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 22:31:51.990793  377634 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 22:31:51.990804  377634 kubeadm.go:319] 
	I1207 22:31:51.990918  377634 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token izaz1b.uyzwo916dp9ka22u \
	I1207 22:31:51.991066  377634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:85ac6ff048e2b2646b65f112736b58cb81d3dd5c10b3ad35b3c1aa0187f127e5 \
	I1207 22:31:51.991099  377634 kubeadm.go:319] 	--control-plane 
	I1207 22:31:51.991107  377634 kubeadm.go:319] 
	I1207 22:31:51.991174  377634 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 22:31:51.991179  377634 kubeadm.go:319] 
	I1207 22:31:51.991248  377634 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token izaz1b.uyzwo916dp9ka22u \
	I1207 22:31:51.991329  377634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:85ac6ff048e2b2646b65f112736b58cb81d3dd5c10b3ad35b3c1aa0187f127e5 
	I1207 22:31:51.993582  377634 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 22:31:51.993688  377634 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
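The join commands printed by kubeadm embed a CA public-key hash (the sha256:85ac6ff0... value above). If it needs to be recomputed, the standard kubeadm recipe applies, with the CA living under /var/lib/minikube/certs in this setup rather than /etc/kubernetes/pki (sketch; the output should match the hash shown above):

    minikube -p addons-007439 ssh -- sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex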
	I1207 22:31:51.993738  377634 cni.go:84] Creating CNI manager for ""
	I1207 22:31:51.993752  377634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:31:51.996153  377634 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 22:31:51.997267  377634 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 22:31:52.001486  377634 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 22:31:52.001501  377634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 22:31:52.014187  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
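The cni.yaml applied here is the kindnet manifest chosen at the "recommending kindnet" step earlier. A sketch for confirming the CNI DaemonSet rolls out (assumes the default DaemonSet name kindnet in kube-system, which this log does not show explicitly):

    kubectl --context addons-007439 -n kube-system get daemonset kindnet
    kubectl --context addons-007439 -n kube-system get pods -o wide | grep kindnet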
	I1207 22:31:52.205777  377634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 22:31:52.205858  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:52.205881  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-007439 minikube.k8s.io/updated_at=2025_12_07T22_31_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=addons-007439 minikube.k8s.io/primary=true
	I1207 22:31:52.295296  377634 ops.go:34] apiserver oom_adj: -16
	I1207 22:31:52.295475  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:52.795974  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:53.295942  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:53.796173  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:54.295582  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:54.795800  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:55.296147  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:55.796072  377634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:31:55.858788  377634 kubeadm.go:1114] duration metric: took 3.652991512s to wait for elevateKubeSystemPrivileges
	I1207 22:31:55.858822  377634 kubeadm.go:403] duration metric: took 13.154834236s to StartCluster
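StartCluster is considered complete once the default ServiceAccount appears (the polling loop above), the node has been labeled, and the minikube-rbac ClusterRoleBinding exists. Equivalent host-side checks, using the kubeconfig context that matches the profile name:

    kubectl --context addons-007439 -n default get serviceaccount default
    kubectl --context addons-007439 get clusterrolebinding minikube-rbac
    kubectl --context addons-007439 get node addons-007439 --show-labels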
	I1207 22:31:55.858846  377634 settings.go:142] acquiring lock: {Name:mk7216554e80629b12e2915c4ff3500dbf3bd589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:55.858960  377634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:31:55.859369  377634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/kubeconfig: {Name:mkfa57fc45c954633f03c454814d9663bdf56b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:31:55.859582  377634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 22:31:55.859621  377634 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 22:31:55.859663  377634 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
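The toEnable map above is the requested addon set for this profile; the resulting enabled/disabled state can be listed afterwards with:

    minikube -p addons-007439 addons list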
	I1207 22:31:55.859814  377634 addons.go:70] Setting yakd=true in profile "addons-007439"
	I1207 22:31:55.859832  377634 addons.go:70] Setting default-storageclass=true in profile "addons-007439"
	I1207 22:31:55.859847  377634 addons.go:70] Setting registry-creds=true in profile "addons-007439"
	I1207 22:31:55.859859  377634 addons.go:239] Setting addon registry-creds=true in "addons-007439"
	I1207 22:31:55.859862  377634 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-007439"
	I1207 22:31:55.859861  377634 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-007439"
	I1207 22:31:55.859885  377634 addons.go:70] Setting inspektor-gadget=true in profile "addons-007439"
	I1207 22:31:55.859894  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859906  377634 addons.go:70] Setting ingress-dns=true in profile "addons-007439"
	I1207 22:31:55.859921  377634 addons.go:239] Setting addon ingress-dns=true in "addons-007439"
	I1207 22:31:55.859838  377634 addons.go:239] Setting addon yakd=true in "addons-007439"
	I1207 22:31:55.859951  377634 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-007439"
	I1207 22:31:55.859956  377634 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-007439"
	I1207 22:31:55.859965  377634 addons.go:70] Setting storage-provisioner=true in profile "addons-007439"
	I1207 22:31:55.859966  377634 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-007439"
	I1207 22:31:55.859976  377634 addons.go:239] Setting addon storage-provisioner=true in "addons-007439"
	I1207 22:31:55.859986  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859990  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859990  377634 addons.go:70] Setting volcano=true in profile "addons-007439"
	I1207 22:31:55.859994  377634 addons.go:70] Setting cloud-spanner=true in profile "addons-007439"
	I1207 22:31:55.859869  377634 config.go:182] Loaded profile config "addons-007439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:31:55.860003  377634 addons.go:239] Setting addon volcano=true in "addons-007439"
	I1207 22:31:55.860013  377634 addons.go:239] Setting addon cloud-spanner=true in "addons-007439"
	I1207 22:31:55.860029  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860042  377634 addons.go:70] Setting volumesnapshots=true in profile "addons-007439"
	I1207 22:31:55.859993  377634 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-007439"
	I1207 22:31:55.860079  377634 addons.go:239] Setting addon volumesnapshots=true in "addons-007439"
	I1207 22:31:55.860096  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860101  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860236  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.859898  377634 addons.go:239] Setting addon inspektor-gadget=true in "addons-007439"
	I1207 22:31:55.860307  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860407  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860478  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860506  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860542  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.859957  377634 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-007439"
	I1207 22:31:55.860609  377634 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-007439"
	I1207 22:31:55.860646  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860786  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860891  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.859984  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.860030  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.861973  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.862114  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.860407  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.862180  377634 out.go:179] * Verifying Kubernetes components...
	I1207 22:31:55.859957  377634 addons.go:70] Setting metrics-server=true in profile "addons-007439"
	I1207 22:31:55.862307  377634 addons.go:239] Setting addon metrics-server=true in "addons-007439"
	I1207 22:31:55.862335  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859819  377634 addons.go:70] Setting ingress=true in profile "addons-007439"
	I1207 22:31:55.862420  377634 addons.go:239] Setting addon ingress=true in "addons-007439"
	I1207 22:31:55.862466  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859947  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859971  377634 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-007439"
	I1207 22:31:55.862749  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859909  377634 addons.go:70] Setting registry=true in profile "addons-007439"
	I1207 22:31:55.862967  377634 addons.go:239] Setting addon registry=true in "addons-007439"
	I1207 22:31:55.863000  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.859946  377634 addons.go:70] Setting gcp-auth=true in profile "addons-007439"
	I1207 22:31:55.863158  377634 mustload.go:66] Loading cluster: addons-007439
	I1207 22:31:55.863634  377634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:31:55.868356  377634 config.go:182] Loaded profile config "addons-007439": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:31:55.869009  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.871345  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.871956  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.872478  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.875249  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.875893  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.898872  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 22:31:55.899035  377634 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1207 22:31:55.899229  377634 addons.go:239] Setting addon default-storageclass=true in "addons-007439"
	I1207 22:31:55.899281  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.900139  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.900393  377634 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:31:55.900410  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 22:31:55.900458  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.901619  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 22:31:55.902534  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 22:31:55.904317  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 22:31:55.906166  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 22:31:55.907183  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 22:31:55.909645  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 22:31:55.910536  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 22:31:55.912687  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 22:31:55.912773  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 22:31:55.912922  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.935727  377634 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 22:31:55.936863  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 22:31:55.936931  377634 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 22:31:55.937040  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.938374  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.948529  377634 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 22:31:55.948670  377634 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1207 22:31:55.949967  377634 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1207 22:31:55.950026  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 22:31:55.950131  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.950678  377634 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:31:55.950859  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 22:31:55.950914  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.962416  377634 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-007439"
	I1207 22:31:55.962473  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:31:55.966628  377634 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1207 22:31:55.966688  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:31:55.967492  377634 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1207 22:31:55.969441  377634 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1207 22:31:55.971487  377634 out.go:179]   - Using image docker.io/registry:3.0.0
	I1207 22:31:55.971539  377634 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:31:55.971555  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1207 22:31:55.971598  377634 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:31:55.971609  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1207 22:31:55.971612  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.971660  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.971886  377634 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1207 22:31:55.972439  377634 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 22:31:55.972452  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1207 22:31:55.972496  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.972989  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1207 22:31:55.973007  377634 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1207 22:31:55.973068  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.974069  377634 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1207 22:31:55.974969  377634 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:31:55.974986  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1207 22:31:55.975029  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.977295  377634 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1207 22:31:55.980746  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1207 22:31:55.982082  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:31:55.983148  377634 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1207 22:31:55.984057  377634 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1207 22:31:55.984439  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:31:55.985186  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 22:31:55.985204  377634 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 22:31:55.985262  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.985976  377634 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:31:55.985989  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1207 22:31:55.986044  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:55.989511  377634 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1207 22:31:56.000662  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.002832  377634 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1207 22:31:56.002860  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1207 22:31:56.002931  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.008834  377634 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 22:31:56.008856  377634 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 22:31:56.008919  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.019133  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.019391  377634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
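The one-liner above rewrites the coredns ConfigMap in place: sed inserts a "log" directive ahead of the existing "errors" line and a hosts block ahead of the "forward . /etc/resolv.conf" line, and the result is piped back through kubectl replace. Reconstructing from the sed expressions alone (a sketch, not copied from the cluster), the injected portion of the Corefile should look roughly like:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...

This is what the later "host record injected into CoreDNS's ConfigMap" line refers to.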
	I1207 22:31:56.021090  377634 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1207 22:31:56.022268  377634 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:31:56.022335  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1207 22:31:56.022423  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.026394  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.028339  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.031562  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.037000  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.060489  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.065924  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.066572  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.070994  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.071476  377634 out.go:179]   - Using image docker.io/busybox:stable
	I1207 22:31:56.072753  377634 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 22:31:56.072939  377634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:31:56.073774  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.074003  377634 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:31:56.074023  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 22:31:56.074076  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:31:56.074680  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.076648  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	W1207 22:31:56.077126  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.077153  377634 retry.go:31] will retry after 139.18215ms: ssh: handshake failed: EOF
	W1207 22:31:56.078110  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.078132  377634 retry.go:31] will retry after 281.656254ms: ssh: handshake failed: EOF
	W1207 22:31:56.079315  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.079334  377634 retry.go:31] will retry after 211.601156ms: ssh: handshake failed: EOF
	I1207 22:31:56.086432  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.093777  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	W1207 22:31:56.097728  377634 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:31:56.097750  377634 retry.go:31] will retry after 139.387321ms: ssh: handshake failed: EOF
	I1207 22:31:56.106146  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:31:56.165612  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 22:31:56.165635  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 22:31:56.173159  377634 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 22:31:56.173185  377634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 22:31:56.180931  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 22:31:56.180955  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 22:31:56.190950  377634 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 22:31:56.190970  377634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 22:31:56.191007  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 22:31:56.194735  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:31:56.195044  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:31:56.204753  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:31:56.208926  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 22:31:56.209000  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 22:31:56.214174  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:31:56.215725  377634 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 22:31:56.215746  377634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 22:31:56.217842  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:31:56.222733  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:31:56.230262  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1207 22:31:56.230284  377634 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1207 22:31:56.232242  377634 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 22:31:56.232285  377634 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 22:31:56.243923  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 22:31:56.243968  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 22:31:56.247826  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 22:31:56.247853  377634 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 22:31:56.256272  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:31:56.272404  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1207 22:31:56.272431  377634 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1207 22:31:56.274678  377634 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 22:31:56.274713  377634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 22:31:56.290791  377634 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:31:56.290820  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 22:31:56.301498  377634 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:31:56.301527  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 22:31:56.320178  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:31:56.340683  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1207 22:31:56.340735  377634 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1207 22:31:56.348774  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 22:31:56.348860  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 22:31:56.352926  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:31:56.391201  377634 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:31:56.391227  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1207 22:31:56.391358  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 22:31:56.391388  377634 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 22:31:56.421965  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 22:31:56.421992  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 22:31:56.429207  377634 node_ready.go:35] waiting up to 6m0s for node "addons-007439" to be "Ready" ...
	I1207 22:31:56.429561  377634 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1207 22:31:56.451140  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 22:31:56.451167  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 22:31:56.463519  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:31:56.484183  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:31:56.487099  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 22:31:56.487126  377634 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 22:31:56.515981  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 22:31:56.516013  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 22:31:56.520805  377634 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:31:56.520834  377634 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 22:31:56.525493  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1207 22:31:56.573406  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:31:56.580258  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 22:31:56.642559  377634 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:31:56.642594  377634 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 22:31:56.681563  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:31:56.936397  377634 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-007439" context rescaled to 1 replicas
	I1207 22:31:57.597447  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.37467948s)
	I1207 22:31:57.597480  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.379601935s)
	I1207 22:31:57.597511  377634 addons.go:495] Verifying addon ingress=true in "addons-007439"
	I1207 22:31:57.597547  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.341243964s)
	I1207 22:31:57.599097  377634 out.go:179] * Verifying ingress addon...
	I1207 22:31:57.601480  377634 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 22:31:57.604541  377634 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
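kapi.go polls the cluster for pods matching the addon's label selector until they report Running. The same check can be reproduced from outside the test with a standard kubectl query (namespace and selector taken from the lines above; the invocation itself is an equivalent, not something the test runs):

    kubectl --context addons-007439 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx

At this point in the log the selector matches nothing; three pods appear a moment later at 22:31:58.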
	I1207 22:31:57.827744  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.507515s)
	W1207 22:31:57.827792  377634 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 22:31:57.827817  377634 retry.go:31] will retry after 286.656289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
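The failure above is an ordering race rather than a persistent error: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define it, so the custom resource cannot be mapped until the new CRDs are established, hence "ensure CRDs are installed first" and the scheduled retry (the retry at 22:31:58 re-runs the batch with apply --force and completes without error). One way to sequence the dependency by hand, using the same manifests named in the log, would be roughly:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

This is only an illustration of the ordering; minikube's own retry loop is what resolves it here.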
	I1207 22:31:57.827843  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.474874637s)
	I1207 22:31:57.827905  377634 addons.go:495] Verifying addon registry=true in "addons-007439"
	I1207 22:31:57.827938  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.364385097s)
	I1207 22:31:57.828007  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.343785726s)
	I1207 22:31:57.829419  377634 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-007439 service yakd-dashboard -n yakd-dashboard
	
	I1207 22:31:57.829428  377634 out.go:179] * Verifying registry addon...
	I1207 22:31:57.831322  377634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 22:31:57.834270  377634 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:31:57.834280  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:58.106720  377634 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 22:31:58.106749  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:58.114637  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:31:58.340149  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (1.814618285s)
	I1207 22:31:58.340266  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.766829512s)
	I1207 22:31:58.340287  377634 addons.go:495] Verifying addon metrics-server=true in "addons-007439"
	I1207 22:31:58.340332  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.760048048s)
	I1207 22:31:58.340734  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.659101212s)
	I1207 22:31:58.340774  377634 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-007439"
	I1207 22:31:58.342015  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:58.342386  377634 out.go:179] * Verifying csi-hostpath-driver addon...
	I1207 22:31:58.344647  377634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1207 22:31:58.349359  377634 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:31:58.349381  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:31:58.432076  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:31:58.605689  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:58.833930  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:58.846809  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:31:59.104833  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:59.334647  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:59.435997  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:31:59.605290  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:31:59.833597  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:31:59.847689  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:00.105488  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:00.333786  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:00.434377  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:00.605149  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:00.681062  377634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.566375933s)
	I1207 22:32:00.833911  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:00.846788  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:32:00.931865  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:01.104658  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:01.334741  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:01.435717  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:01.604183  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:01.834410  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:01.847319  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:02.105145  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:02.334660  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:02.434965  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:02.604534  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:02.834314  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:02.847546  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:03.104666  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:03.334520  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1207 22:32:03.431302  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:03.435188  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:03.545404  377634 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 22:32:03.545478  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:32:03.562455  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:32:03.605382  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:03.660272  377634 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 22:32:03.672046  377634 addons.go:239] Setting addon gcp-auth=true in "addons-007439"
	I1207 22:32:03.672097  377634 host.go:66] Checking if "addons-007439" exists ...
	I1207 22:32:03.672450  377634 cli_runner.go:164] Run: docker container inspect addons-007439 --format={{.State.Status}}
	I1207 22:32:03.689008  377634 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 22:32:03.689057  377634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-007439
	I1207 22:32:03.705716  377634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/addons-007439/id_rsa Username:docker}
	I1207 22:32:03.795424  377634 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:32:03.796647  377634 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1207 22:32:03.797799  377634 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 22:32:03.797817  377634 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 22:32:03.810327  377634 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 22:32:03.810349  377634 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 22:32:03.822522  377634 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:32:03.822541  377634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1207 22:32:03.833962  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:03.834960  377634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:32:03.847291  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:04.104516  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:04.130667  377634 addons.go:495] Verifying addon gcp-auth=true in "addons-007439"
	I1207 22:32:04.131903  377634 out.go:179] * Verifying gcp-auth addon...
	I1207 22:32:04.135414  377634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 22:32:04.204719  377634 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 22:32:04.204736  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:04.333926  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:04.346881  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:04.604254  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:04.638337  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:04.833745  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:04.846689  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:05.104848  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:05.137887  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:05.334373  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:05.347396  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:05.605427  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:05.638493  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:05.833925  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:05.847042  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:32:05.932127  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:06.105066  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:06.138110  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:06.334385  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:06.347290  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:06.604087  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:06.637881  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:06.834122  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:06.847136  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:07.104057  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:07.138143  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:07.334832  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:07.346679  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:07.604984  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:07.638029  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:07.834591  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:07.847688  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:08.104254  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:08.138452  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:08.333827  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:08.346889  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1207 22:32:08.432063  377634 node_ready.go:57] node "addons-007439" has "Ready":"False" status (will retry)
	I1207 22:32:08.605098  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:08.642095  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:08.833496  377634 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:32:08.833514  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:08.847703  377634 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:32:08.847721  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:08.932985  377634 node_ready.go:49] node "addons-007439" is "Ready"
	I1207 22:32:08.933018  377634 node_ready.go:38] duration metric: took 12.503775231s for node "addons-007439" to be "Ready" ...
	I1207 22:32:08.933036  377634 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:32:08.933089  377634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:32:08.955562  377634 api_server.go:72] duration metric: took 13.095896356s to wait for apiserver process to appear ...
	I1207 22:32:08.955610  377634 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:32:08.955637  377634 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 22:32:08.962732  377634 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 22:32:08.964159  377634 api_server.go:141] control plane version: v1.34.2
	I1207 22:32:08.964194  377634 api_server.go:131] duration metric: took 8.574483ms to wait for apiserver health ...
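The healthz probe above can be reproduced against the same endpoint recorded in the log; -k skips TLS verification since the apiserver presents the cluster's self-signed CA, and anonymous access to /healthz is normally permitted by the default RBAC rules. A sketch:

    curl -sk https://192.168.49.2:8443/healthz

which should return the same "ok" body shown above.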
	I1207 22:32:08.964207  377634 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:32:08.970744  377634 system_pods.go:59] 20 kube-system pods found
	I1207 22:32:08.970859  377634 system_pods.go:61] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:08.970874  377634 system_pods.go:61] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:08.970885  377634 system_pods.go:61] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:08.970893  377634 system_pods.go:61] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:08.970902  377634 system_pods.go:61] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:08.970921  377634 system_pods.go:61] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:08.970926  377634 system_pods.go:61] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:08.970931  377634 system_pods.go:61] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:08.970936  377634 system_pods.go:61] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:08.970944  377634 system_pods.go:61] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:08.970950  377634 system_pods.go:61] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:08.970956  377634 system_pods.go:61] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:08.970963  377634 system_pods.go:61] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:08.970971  377634 system_pods.go:61] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:08.970979  377634 system_pods.go:61] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:08.970992  377634 system_pods.go:61] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:08.971004  377634 system_pods.go:61] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:08.971015  377634 system_pods.go:61] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.971023  377634 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.971031  377634 system_pods.go:61] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:08.971039  377634 system_pods.go:74] duration metric: took 6.824307ms to wait for pod list to return data ...
	I1207 22:32:08.971051  377634 default_sa.go:34] waiting for default service account to be created ...
	I1207 22:32:08.974302  377634 default_sa.go:45] found service account: "default"
	I1207 22:32:08.974352  377634 default_sa.go:55] duration metric: took 3.293515ms for default service account to be created ...
	I1207 22:32:08.974374  377634 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 22:32:08.980352  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:08.980425  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:08.980447  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:08.980466  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:08.980484  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:08.980518  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:08.980539  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:08.980561  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:08.980578  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:08.980594  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:08.980610  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:08.980625  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:08.980646  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:08.980664  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:08.980680  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:08.980717  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:08.980750  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:08.980770  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:08.980787  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.980804  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:08.980826  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:08.980857  377634 retry.go:31] will retry after 208.825674ms: missing components: kube-dns
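The missing component named in the retry, kube-dns, corresponds to the coredns pod listed above (coredns-66bc5c9577-qv27l), which is still Pending. Assuming the conventional k8s-app=kube-dns label that kubeadm-style clusters apply to coredns, the same check can be made by hand with:

    kubectl --context addons-007439 -n kube-system get pods -l k8s-app=kube-dns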
	I1207 22:32:09.105639  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:09.140036  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:09.199864  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:09.199914  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:09.199946  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:09.199959  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:09.199977  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:09.199987  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:09.199994  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:09.200006  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:09.200020  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:09.200027  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:09.200041  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:09.200047  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:09.200054  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:09.200062  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:09.200075  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:09.200084  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:09.200096  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:09.200105  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:09.200115  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.200129  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.200140  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:09.200160  377634 retry.go:31] will retry after 328.124289ms: missing components: kube-dns
	I1207 22:32:09.337143  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:09.438022  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:09.539152  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:09.539184  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:09.539191  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:09.539198  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:09.539204  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:09.539209  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:09.539214  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:09.539218  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:09.539223  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:09.539226  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:09.539232  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:09.539238  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:09.539242  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:09.539250  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:09.539259  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:09.539263  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:09.539269  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:09.539275  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:09.539280  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.539288  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.539293  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:09.539308  377634 retry.go:31] will retry after 332.262547ms: missing components: kube-dns
	I1207 22:32:09.604857  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:09.637744  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:09.834471  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:09.847800  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:09.874557  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:09.874588  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:09.874595  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:09.874602  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:09.874608  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:09.874613  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:09.874618  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:09.874623  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:09.874629  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:09.874634  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:09.874641  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:09.874645  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:09.874649  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:09.874657  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:09.874663  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:09.874671  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:09.874676  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:09.874681  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:09.874686  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.874722  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:09.874730  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:09.874745  377634 retry.go:31] will retry after 426.570645ms: missing components: kube-dns
	I1207 22:32:10.104933  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:10.137912  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:10.305510  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:10.305542  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:10.305551  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:32:10.305558  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:10.305563  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:10.305585  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:10.305595  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:10.305599  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:10.305603  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:10.305610  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:10.305616  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:10.305619  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:10.305623  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:10.305631  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:10.305636  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:10.305641  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:10.305647  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:10.305653  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:10.305664  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.305670  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.305681  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:32:10.305709  377634 retry.go:31] will retry after 579.48594ms: missing components: kube-dns
	I1207 22:32:10.334643  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:10.354188  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:10.605773  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:10.638018  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:10.835343  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:10.848131  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:10.889933  377634 system_pods.go:86] 20 kube-system pods found
	I1207 22:32:10.889966  377634 system_pods.go:89] "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:32:10.889974  377634 system_pods.go:89] "coredns-66bc5c9577-qv27l" [b54c5507-48ac-46cd-a418-ad889f0c4081] Running
	I1207 22:32:10.889986  377634 system_pods.go:89] "csi-hostpath-attacher-0" [ad3bde9a-0070-4465-afac-ee4c7efa4317] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:32:10.889995  377634 system_pods.go:89] "csi-hostpath-resizer-0" [a94e96e8-fca6-4a06-8765-13d2ef93a8da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:32:10.890003  377634 system_pods.go:89] "csi-hostpathplugin-rsn9j" [f690cddd-5270-404d-bd20-3beaf2f88e57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:32:10.890013  377634 system_pods.go:89] "etcd-addons-007439" [2632aa09-8415-49c7-a35f-956cd4d45c1a] Running
	I1207 22:32:10.890019  377634 system_pods.go:89] "kindnet-8t6g6" [2803a223-e9a2-4daa-9b70-2a094a8bfcf9] Running
	I1207 22:32:10.890029  377634 system_pods.go:89] "kube-apiserver-addons-007439" [33792473-a7d5-4e88-8255-f807cad0f40c] Running
	I1207 22:32:10.890035  377634 system_pods.go:89] "kube-controller-manager-addons-007439" [09d14a1d-243d-474b-a662-3178f0387ad2] Running
	I1207 22:32:10.890048  377634 system_pods.go:89] "kube-ingress-dns-minikube" [b977131b-fcf0-4612-af35-8b9e761cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:32:10.890053  377634 system_pods.go:89] "kube-proxy-724nh" [d18f83a2-2b43-4c5b-b426-35611913b0fe] Running
	I1207 22:32:10.890061  377634 system_pods.go:89] "kube-scheduler-addons-007439" [ff7ef5b1-4778-4a9e-8e65-25eb520a768d] Running
	I1207 22:32:10.890070  377634 system_pods.go:89] "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:32:10.890080  377634 system_pods.go:89] "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:32:10.890092  377634 system_pods.go:89] "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:32:10.890102  377634 system_pods.go:89] "registry-creds-764b6fb674-khbsn" [2604cd75-b452-415c-a35a-1f782ae157c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:32:10.890113  377634 system_pods.go:89] "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:32:10.890121  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86qjb" [0ed800fc-182e-493f-aacb-7fb2337b1f90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.890133  377634 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzrfv" [81da8183-509c-4ce8-89dd-c5890c19d49c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:32:10.890139  377634 system_pods.go:89] "storage-provisioner" [62bc9461-6c42-46bb-829b-86b3cbd143b2] Running
	I1207 22:32:10.890153  377634 system_pods.go:126] duration metric: took 1.915754378s to wait for k8s-apps to be running ...
	I1207 22:32:10.890164  377634 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 22:32:10.890219  377634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 22:32:10.906540  377634 system_svc.go:56] duration metric: took 16.366971ms WaitForService to wait for kubelet
	I1207 22:32:10.906572  377634 kubeadm.go:587] duration metric: took 15.04691413s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:32:10.906604  377634 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:32:10.909678  377634 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:32:10.909724  377634 node_conditions.go:123] node cpu capacity is 8
	I1207 22:32:10.909744  377634 node_conditions.go:105] duration metric: took 3.134337ms to run NodePressure ...
	I1207 22:32:10.909760  377634 start.go:242] waiting for startup goroutines ...
	I1207 22:32:11.104911  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:11.138422  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:11.334413  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:11.348242  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:11.605150  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:11.638138  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:11.835620  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:11.848219  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:12.207206  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:12.207304  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:12.334989  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:12.351263  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:12.605494  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:12.638752  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:12.834821  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:12.935411  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:13.104955  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:13.138457  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:13.334045  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:13.347712  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:13.686513  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:13.686585  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:13.834664  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:13.848421  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:14.104990  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:14.143039  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:14.335004  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:14.347405  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:14.605743  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:14.706289  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:14.834854  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:14.846912  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:15.104600  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:15.138517  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:15.334133  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:15.347520  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:15.604910  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:15.638165  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:15.834907  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:15.847728  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:16.104782  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:16.137819  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:16.334412  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:16.347552  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:16.605063  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:16.638179  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:16.834799  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:16.848099  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:17.104869  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:17.138465  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:17.334357  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:17.348035  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:17.605579  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:17.639052  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:17.835125  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:17.847679  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:18.104372  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:18.138681  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:18.334670  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:18.348274  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:18.605466  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:18.638821  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:18.835018  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:18.847719  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:19.104506  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:19.138933  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:19.334728  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:19.348285  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:19.605597  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:19.637585  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:19.834517  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:19.848121  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:20.105267  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:20.138341  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:20.333981  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:20.347558  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:20.605240  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:20.638429  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:20.834400  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:20.848061  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:21.104969  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:21.138290  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:21.335222  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:21.348026  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:21.605182  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:21.705680  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:21.834775  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:21.848459  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:22.105829  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:22.137835  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:22.334321  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:22.347720  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:22.604431  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:22.638791  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:22.834496  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:22.847801  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:23.104618  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:23.139084  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:23.335316  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:23.348100  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:23.604420  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:23.641685  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:23.834195  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:23.848319  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:24.104869  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:24.138335  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:24.376588  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:24.376588  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:24.611988  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:24.638784  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:24.834270  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:24.847811  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:25.104511  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:25.138951  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:25.334711  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:25.348560  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:25.604539  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:25.638444  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:25.833886  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:25.847136  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:26.105172  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:26.138554  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:26.334116  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:26.347735  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:26.604304  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:26.638469  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:26.833872  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:26.847202  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:27.105095  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:27.138181  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:27.334918  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:27.347453  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:27.605503  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:27.639257  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:27.835393  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:27.848161  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:28.105234  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:28.138189  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:28.334409  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:28.348025  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:28.605045  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:28.638495  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:28.834500  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:28.847989  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:29.104978  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:29.138132  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:29.334524  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:29.348120  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:29.605256  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:29.638259  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:29.833943  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:29.847203  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:30.105674  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:30.138188  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:30.335315  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:30.347465  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:30.605765  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:30.638181  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:30.835279  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:30.847367  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:31.105066  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:31.138006  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:31.334639  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:31.348255  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:31.606744  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:31.707203  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:31.835137  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:31.847547  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:32.104291  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:32.138363  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:32.333673  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:32.347981  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:32.604665  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:32.637822  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:32.834587  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:32.848059  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:33.104897  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:33.137986  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:33.334726  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:33.348161  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:33.605896  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:33.707355  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:33.834376  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:33.847713  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:34.104872  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:34.138548  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:34.334392  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:34.347734  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:34.617284  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:34.641765  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:34.834059  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:34.847418  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:35.104948  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:35.138339  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:35.334159  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:35.347775  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:35.604463  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:35.704951  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:35.834251  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:35.847899  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:36.104745  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:36.137687  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:36.334472  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:36.348025  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:36.605335  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:36.639444  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:36.835541  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:36.935810  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:37.104098  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:37.137960  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:37.334806  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:37.347343  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:37.605444  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:37.686742  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:37.834649  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:37.848212  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:38.105060  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:38.138286  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:38.334485  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:38.348290  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:38.605560  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:38.681296  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:38.833941  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:38.847211  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:39.104988  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:39.138217  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:39.334785  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:39.347047  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:39.604872  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:39.638068  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:39.834681  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:39.847860  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:40.104799  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:40.137959  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:40.334828  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:40.346954  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:40.604781  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:40.638089  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:40.834954  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:40.847467  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:41.105896  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:41.138643  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:41.334302  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:41.348161  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:41.605223  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:41.639203  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:41.834979  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:41.847725  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:42.105295  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:42.206084  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:42.334497  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:42.347813  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:42.655138  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:42.655138  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:42.834478  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:42.847891  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:43.104710  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:43.137780  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:43.334279  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:43.347733  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:43.604718  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:43.638077  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:43.835160  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:43.847428  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:44.105516  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:44.138837  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:44.334338  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:44.347679  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:44.605390  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:44.638634  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:44.834216  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:44.847924  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:45.105186  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:45.205235  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:45.334814  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:32:45.347845  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:45.647109  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:45.647156  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:45.835053  377634 kapi.go:107] duration metric: took 48.003727772s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 22:32:45.847133  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:46.105297  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:46.138584  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:46.347997  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:46.605087  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:46.638190  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:46.848565  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:47.104823  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:47.138233  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:47.348486  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:47.604416  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:47.705149  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:47.848098  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:48.105652  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:48.138017  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:48.348205  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:48.604639  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:48.637686  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:48.848128  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:49.104946  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:49.138422  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:49.348907  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:49.604924  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:49.638486  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:49.849089  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:50.105263  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:50.138751  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:50.348212  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:50.604836  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:50.637755  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:50.847880  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:51.111472  377634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:32:51.138593  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:51.348513  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:51.605082  377634 kapi.go:107] duration metric: took 54.003598032s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 22:32:51.638597  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:51.848094  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:52.151201  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:52.348522  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:52.680794  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:52.928244  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:53.138820  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:53.347372  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:53.639403  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:53.850421  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:54.138666  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:54.347558  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:54.638591  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:54.847800  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:55.138383  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:55.348394  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:32:55.639002  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:55.848630  377634 kapi.go:107] duration metric: took 57.503979448s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 22:32:56.138538  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:56.637933  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:57.137939  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:57.638818  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:58.138824  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:58.638943  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:59.138925  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:32:59.638398  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:00.138445  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:00.638311  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:01.138672  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:01.638823  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:02.139464  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:02.639133  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:03.138367  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:03.701209  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:04.139061  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:04.638295  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:05.138216  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:05.638497  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:06.138823  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:06.639021  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:07.138428  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:07.638413  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:08.138802  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:08.638667  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:09.138899  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:09.638212  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:10.138617  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:10.638874  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:11.138328  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:11.638414  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:12.138926  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:12.638982  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:13.138559  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:13.638433  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:14.138780  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:14.639254  377634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:33:15.138362  377634 kapi.go:107] duration metric: took 1m11.002943391s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 22:33:15.139826  377634 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-007439 cluster.
	I1207 22:33:15.141075  377634 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 22:33:15.142206  377634 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 22:33:15.143317  377634 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, yakd, volcano, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1207 22:33:15.144373  377634 addons.go:530] duration metric: took 1m19.284711217s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget yakd volcano metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1207 22:33:15.144415  377634 start.go:247] waiting for cluster config update ...
	I1207 22:33:15.144443  377634 start.go:256] writing updated cluster config ...
	I1207 22:33:15.144758  377634 ssh_runner.go:195] Run: rm -f paused
	I1207 22:33:15.148918  377634 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:33:15.151546  377634 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qv27l" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.155282  377634 pod_ready.go:94] pod "coredns-66bc5c9577-qv27l" is "Ready"
	I1207 22:33:15.155302  377634 pod_ready.go:86] duration metric: took 3.734603ms for pod "coredns-66bc5c9577-qv27l" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.157023  377634 pod_ready.go:83] waiting for pod "etcd-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.160326  377634 pod_ready.go:94] pod "etcd-addons-007439" is "Ready"
	I1207 22:33:15.160345  377634 pod_ready.go:86] duration metric: took 3.301966ms for pod "etcd-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.162089  377634 pod_ready.go:83] waiting for pod "kube-apiserver-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.165157  377634 pod_ready.go:94] pod "kube-apiserver-addons-007439" is "Ready"
	I1207 22:33:15.165177  377634 pod_ready.go:86] duration metric: took 3.071138ms for pod "kube-apiserver-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.166767  377634 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.551957  377634 pod_ready.go:94] pod "kube-controller-manager-addons-007439" is "Ready"
	I1207 22:33:15.551982  377634 pod_ready.go:86] duration metric: took 385.197709ms for pod "kube-controller-manager-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:15.752958  377634 pod_ready.go:83] waiting for pod "kube-proxy-724nh" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.152105  377634 pod_ready.go:94] pod "kube-proxy-724nh" is "Ready"
	I1207 22:33:16.152131  377634 pod_ready.go:86] duration metric: took 399.134038ms for pod "kube-proxy-724nh" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.353056  377634 pod_ready.go:83] waiting for pod "kube-scheduler-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.752535  377634 pod_ready.go:94] pod "kube-scheduler-addons-007439" is "Ready"
	I1207 22:33:16.752566  377634 pod_ready.go:86] duration metric: took 399.485526ms for pod "kube-scheduler-addons-007439" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:33:16.752588  377634 pod_ready.go:40] duration metric: took 1.603635875s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:33:16.797666  377634 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 22:33:16.799542  377634 out.go:179] * Done! kubectl is now configured to use "addons-007439" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	c4e503e069f86       56cc512116c8f       5 minutes ago       Running             busybox                   0                   7fc15941c9d4c       busybox                                    default
	49d2659e5d005       97fe896f8c07b       6 minutes ago       Running             controller                0                   4c70392eaf980       ingress-nginx-controller-6c8bf45fb-j9c9h   ingress-nginx
	31e4b94276ca9       e16d1e3a10667       6 minutes ago       Running             local-path-provisioner    0                   89e5305c1e77c       local-path-provisioner-648f6765c9-47p5s    local-path-storage
	64e04f3b48586       884bd0ac01c8f       6 minutes ago       Exited              patch                     0                   2494dcf80b409       ingress-nginx-admission-patch-ktwnj        ingress-nginx
	a875b450d5fed       884bd0ac01c8f       6 minutes ago       Exited              create                    0                   6d29205806d41       ingress-nginx-admission-create-mx7h5       ingress-nginx
	e4a3f9148c61d       b6ab53fbfedaa       6 minutes ago       Running             minikube-ingress-dns      0                   57857a49054ef       kube-ingress-dns-minikube                  kube-system
	8c0d12e2b6b9f       6e38f40d628db       7 minutes ago       Running             storage-provisioner       0                   1866e052153ea       storage-provisioner                        kube-system
	75238e6b95b61       52546a367cc9e       7 minutes ago       Running             coredns                   0                   daccf049c651e       coredns-66bc5c9577-qv27l                   kube-system
	4022c9fb1481a       409467f978b4a       7 minutes ago       Running             kindnet-cni               0                   8c40702c56ffb       kindnet-8t6g6                              kube-system
	7ec22013ddab0       8aa150647e88a       7 minutes ago       Running             kube-proxy                0                   50b8b1f1714e5       kube-proxy-724nh                           kube-system
	07c7fe1323cf0       a5f569d49a979       7 minutes ago       Running             kube-apiserver            0                   5178bcc811637       kube-apiserver-addons-007439               kube-system
	998061cc2a197       88320b5498ff2       7 minutes ago       Running             kube-scheduler            0                   66accb717e258       kube-scheduler-addons-007439               kube-system
	a0a3f7747e6dd       01e8bacf0f500       7 minutes ago       Running             kube-controller-manager   0                   66a03eb47796d       kube-controller-manager-addons-007439      kube-system
	2d0f96ec7aaa0       a3e246e9556e9       7 minutes ago       Running             etcd                      0                   106ef906bff2c       etcd-addons-007439                         kube-system
	
	
	==> containerd <==
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.648375068Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2803a223_e9a2_4daa_9b70_2a094a8bfcf9.slice/cri-containerd-4022c9fb1481ae7b0a871bde371a54bebba3d28dfbb30934d428b3720ab39fe2.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.649108326Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e84b285003d418b18a2ad2e58cc0f60.slice/cri-containerd-a0a3f7747e6dde69a78a18a0ec96bc7cfe02e3f2db746ef7631aac0314553e12.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.649199519Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e84b285003d418b18a2ad2e58cc0f60.slice/cri-containerd-a0a3f7747e6dde69a78a18a0ec96bc7cfe02e3f2db746ef7631aac0314553e12.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.650042006Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14c4d222_e85c_479a_8f98_ba8ed4489954.slice/cri-containerd-31e4b94276ca916f63b766b5f44c5faaa6fa6b939992955391b7cbdf0b887540.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.650173016Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14c4d222_e85c_479a_8f98_ba8ed4489954.slice/cri-containerd-31e4b94276ca916f63b766b5f44c5faaa6fa6b939992955391b7cbdf0b887540.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.650841424Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23475c152526c71f2fc12d9ccd9d08da.slice/cri-containerd-998061cc2a197b456e37469c981d6d7e6eccb94dcca2b9ce4bd676c5e41be358.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.650923744Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23475c152526c71f2fc12d9ccd9d08da.slice/cri-containerd-998061cc2a197b456e37469c981d6d7e6eccb94dcca2b9ce4bd676c5e41be358.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.651558863Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18f83a2_2b43_4c5b_b426_35611913b0fe.slice/cri-containerd-7ec22013ddab02873bd1a7721d350a91368b4489a4fd97bffed120183b9d6f5d.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.651659195Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18f83a2_2b43_4c5b_b426_35611913b0fe.slice/cri-containerd-7ec22013ddab02873bd1a7721d350a91368b4489a4fd97bffed120183b9d6f5d.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.652314261Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62bc9461_6c42_46bb_829b_86b3cbd143b2.slice/cri-containerd-8c0d12e2b6b9f8c7a71e471624fccf0060cb79ad1cf1e51555cfaeb9771ef31b.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.652391386Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62bc9461_6c42_46bb_829b_86b3cbd143b2.slice/cri-containerd-8c0d12e2b6b9f8c7a71e471624fccf0060cb79ad1cf1e51555cfaeb9771ef31b.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.653209449Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb977131b_fcf0_4612_af35_8b9e761cbebc.slice/cri-containerd-e4a3f9148c61df7ea7cb2808218f1316d162b66846c55ef9b8af7e33c9cbefab.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.653332846Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb977131b_fcf0_4612_af35_8b9e761cbebc.slice/cri-containerd-e4a3f9148c61df7ea7cb2808218f1316d162b66846c55ef9b8af7e33c9cbefab.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.654151510Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54c5507_48ac_46cd_a418_ad889f0c4081.slice/cri-containerd-75238e6b95b61458a2b61f6a22ba8094e873f25765c0904f4697f54b5321cd61.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.654276487Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54c5507_48ac_46cd_a418_ad889f0c4081.slice/cri-containerd-75238e6b95b61458a2b61f6a22ba8094e873f25765c0904f4697f54b5321cd61.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.655030452Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f2c93c3bef251b22011ee6a4bd35d9.slice/cri-containerd-2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.655136991Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f2c93c3bef251b22011ee6a4bd35d9.slice/cri-containerd-2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.656046857Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca69fa_83a8_43b2_9cc1_ceabbe875006.slice/cri-containerd-49d2659e5d0050668b505655d28976e9b38bc570cc77d174b4f8f6c1444f5537.scope/hugetlb.2MB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.656154447Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca69fa_83a8_43b2_9cc1_ceabbe875006.slice/cri-containerd-49d2659e5d0050668b505655d28976e9b38bc570cc77d174b4f8f6c1444f5537.scope/hugetlb.1GB.events\""
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.957864844Z" level=info msg="container event discarded" container=c0b8a8602dba5474bedb73db2bd36714f5799566d619844ab1b030be5491741d type=CONTAINER_CREATED_EVENT
	Dec 07 22:39:12 addons-007439 containerd[659]: time="2025-12-07T22:39:12.957920747Z" level=info msg="container event discarded" container=c0b8a8602dba5474bedb73db2bd36714f5799566d619844ab1b030be5491741d type=CONTAINER_STARTED_EVENT
	Dec 07 22:39:13 addons-007439 containerd[659]: time="2025-12-07T22:39:13.122449437Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:39:13 addons-007439 containerd[659]: time="2025-12-07T22:39:13.122471347Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	Dec 07 22:39:13 addons-007439 containerd[659]: time="2025-12-07T22:39:13.430745558Z" level=info msg="container event discarded" container=2c044bd0c8fc69aa0e894980bb6242d638512964da798172ec7eeb6c6af0f3e3 type=CONTAINER_CREATED_EVENT
	Dec 07 22:39:13 addons-007439 containerd[659]: time="2025-12-07T22:39:13.430826184Z" level=info msg="container event discarded" container=2c044bd0c8fc69aa0e894980bb6242d638512964da798172ec7eeb6c6af0f3e3 type=CONTAINER_STARTED_EVENT
	
	
	==> coredns [75238e6b95b61458a2b61f6a22ba8094e873f25765c0904f4697f54b5321cd61] <==
	[INFO] 10.244.0.22:36522 - 2164 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176689s
	[INFO] 10.244.0.22:35062 - 24180 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.0000911s
	[INFO] 10.244.0.22:35062 - 23904 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000125392s
	[INFO] 10.244.0.22:51383 - 52329 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000073453s
	[INFO] 10.244.0.22:51383 - 52564 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000123082s
	[INFO] 10.244.0.22:56036 - 15109 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000066545s
	[INFO] 10.244.0.22:56036 - 14939 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00010264s
	[INFO] 10.244.0.22:53993 - 64779 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009872s
	[INFO] 10.244.0.22:53993 - 65251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144223s
	[INFO] 10.244.0.26:54923 - 59053 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234195s
	[INFO] 10.244.0.26:46003 - 1011 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000329162s
	[INFO] 10.244.0.26:47520 - 46082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155183s
	[INFO] 10.244.0.26:37977 - 4920 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000239923s
	[INFO] 10.244.0.26:40242 - 4515 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134789s
	[INFO] 10.244.0.26:50882 - 20631 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146945s
	[INFO] 10.244.0.26:47109 - 25839 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007579691s
	[INFO] 10.244.0.26:35748 - 58037 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.010934137s
	[INFO] 10.244.0.26:59910 - 10885 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007617469s
	[INFO] 10.244.0.26:52765 - 51850 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007947021s
	[INFO] 10.244.0.26:59171 - 62450 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00512494s
	[INFO] 10.244.0.26:58580 - 6517 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005411444s
	[INFO] 10.244.0.26:56394 - 28475 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000818018s
	[INFO] 10.244.0.26:47777 - 62109 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001117278s
	[INFO] 10.244.0.31:33017 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021271s
	[INFO] 10.244.0.31:60903 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186967s
	
	
	==> describe nodes <==
	Name:               addons-007439
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-007439
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=addons-007439
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_31_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-007439
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:31:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-007439
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:39:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:34:55 +0000   Sun, 07 Dec 2025 22:31:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:34:55 +0000   Sun, 07 Dec 2025 22:31:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:34:55 +0000   Sun, 07 Dec 2025 22:31:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:34:55 +0000   Sun, 07 Dec 2025 22:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-007439
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                9cbd7bd8-6368-4564-9823-8547ed273796
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j9c9h                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m16s
	  kube-system                 coredns-66bc5c9577-qv27l                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m16s
	  kube-system                 etcd-addons-007439                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m22s
	  kube-system                 kindnet-8t6g6                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m17s
	  kube-system                 kube-apiserver-addons-007439                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-controller-manager-addons-007439                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-proxy-724nh                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-scheduler-addons-007439                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  local-path-storage          helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-47p5s                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m15s  kube-proxy       
	  Normal  Starting                 7m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m22s  kubelet          Node addons-007439 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s  kubelet          Node addons-007439 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s  kubelet          Node addons-007439 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m18s  node-controller  Node addons-007439 event: Registered Node addons-007439 in Controller
	  Normal  NodeReady                7m5s   kubelet          Node addons-007439 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [2d0f96ec7aaa01a68daa93e9ce562fd5c87edd0e56b4f37f2768785d23db1871] <==
	{"level":"warn","ts":"2025-12-07T22:31:48.463058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:31:58.418206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:31:58.426060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:12.205493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.645463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-07T22:32:12.205606Z","caller":"traceutil/trace.go:172","msg":"trace[1470452276] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1067; }","duration":"101.772953ms","start":"2025-12-07T22:32:12.103816Z","end":"2025-12-07T22:32:12.205589Z","steps":["trace[1470452276] 'range keys from in-memory index tree'  (duration: 101.569673ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-07T22:32:23.585959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.613744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.657598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.666262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.676407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.684744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:23.730367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.848312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.854650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.867091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.876963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:32:25.883364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:32:34.615525Z","caller":"traceutil/trace.go:172","msg":"trace[1560222383] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"167.825336ms","start":"2025-12-07T22:32:34.447682Z","end":"2025-12-07T22:32:34.615507Z","steps":["trace[1560222383] 'process raft request'  (duration: 167.7316ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:32:34.785718Z","caller":"traceutil/trace.go:172","msg":"trace[142784423] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"140.107217ms","start":"2025-12-07T22:32:34.645567Z","end":"2025-12-07T22:32:34.785675Z","steps":["trace[142784423] 'process raft request'  (duration: 87.025087ms)","trace[142784423] 'compare'  (duration: 52.9808ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T22:32:52.926412Z","caller":"traceutil/trace.go:172","msg":"trace[1444399122] transaction","detail":"{read_only:false; response_revision:1348; number_of_response:1; }","duration":"116.189643ms","start":"2025-12-07T22:32:52.810208Z","end":"2025-12-07T22:32:52.926397Z","steps":["trace[1444399122] 'process raft request'  (duration: 116.110534ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:33:45.055829Z","caller":"traceutil/trace.go:172","msg":"trace[1004819428] linearizableReadLoop","detail":"{readStateIndex:1575; appliedIndex:1575; }","duration":"102.223648ms","start":"2025-12-07T22:33:44.953581Z","end":"2025-12-07T22:33:45.055805Z","steps":["trace[1004819428] 'read index received'  (duration: 102.21564ms)","trace[1004819428] 'applied index is now lower than readState.Index'  (duration: 6.583µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T22:33:45.119493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.88462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/volcano-system/volcano-controller-configmap\" limit:1 ","response":"range_response_count:1 size:696"}
	{"level":"info","ts":"2025-12-07T22:33:45.119568Z","caller":"traceutil/trace.go:172","msg":"trace[444483577] range","detail":"{range_begin:/registry/configmaps/volcano-system/volcano-controller-configmap; range_end:; response_count:1; response_revision:1534; }","duration":"165.973554ms","start":"2025-12-07T22:33:44.953578Z","end":"2025-12-07T22:33:45.119551Z","steps":["trace[444483577] 'agreement among raft nodes before linearized reading'  (duration: 102.325231ms)","trace[444483577] 'range keys from in-memory index tree'  (duration: 63.472647ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T22:33:45.119626Z","caller":"traceutil/trace.go:172","msg":"trace[510443830] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1535; }","duration":"177.968009ms","start":"2025-12-07T22:33:44.941644Z","end":"2025-12-07T22:33:45.119612Z","steps":["trace[510443830] 'process raft request'  (duration: 114.240627ms)","trace[510443830] 'compare'  (duration: 63.519906ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T22:33:45.119749Z","caller":"traceutil/trace.go:172","msg":"trace[510971700] transaction","detail":"{read_only:false; response_revision:1536; number_of_response:1; }","duration":"138.551277ms","start":"2025-12-07T22:33:44.981181Z","end":"2025-12-07T22:33:45.119732Z","steps":["trace[510971700] 'process raft request'  (duration: 138.362578ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:39:13 up  1:21,  0 user,  load average: 0.27, 0.83, 1.48
	Linux addons-007439 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4022c9fb1481ae7b0a871bde371a54bebba3d28dfbb30934d428b3720ab39fe2] <==
	I1207 22:37:08.424440       1 main.go:301] handling current node
	I1207 22:37:18.424627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:37:18.424657       1 main.go:301] handling current node
	I1207 22:37:28.424619       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:37:28.424649       1 main.go:301] handling current node
	I1207 22:37:38.424784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:37:38.424816       1 main.go:301] handling current node
	I1207 22:37:48.424680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:37:48.424746       1 main.go:301] handling current node
	I1207 22:37:58.433830       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:37:58.433862       1 main.go:301] handling current node
	I1207 22:38:08.427666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:38:08.427712       1 main.go:301] handling current node
	I1207 22:38:18.424814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:38:18.424848       1 main.go:301] handling current node
	I1207 22:38:28.424517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:38:28.424553       1 main.go:301] handling current node
	I1207 22:38:38.433101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:38:38.433134       1 main.go:301] handling current node
	I1207 22:38:48.425110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:38:48.425139       1 main.go:301] handling current node
	I1207 22:38:58.424236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:38:58.424273       1 main.go:301] handling current node
	I1207 22:39:08.425078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:39:08.425107       1 main.go:301] handling current node
	
	
	==> kube-apiserver [07c7fe1323cf055bbbf9b90ad55b35069663427c3e640475467f69b7d9f71828] <==
	W1207 22:33:46.353751       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1207 22:33:46.448809       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1207 22:33:46.666103       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1207 22:33:46.754795       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1207 22:34:03.231581       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41204: use of closed network connection
	E1207 22:34:03.391462       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41230: use of closed network connection
	I1207 22:34:12.984087       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.83.62"}
	I1207 22:34:31.644379       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1207 22:34:31.806141       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.56.197"}
	I1207 22:34:34.111205       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1207 22:35:01.906207       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.906259       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.919214       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.919255       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.922666       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.922723       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.941614       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.941659       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 22:35:01.945894       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 22:35:01.945933       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1207 22:35:02.850032       1 watch.go:272] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	W1207 22:35:02.920200       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1207 22:35:02.946570       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1207 22:35:03.064422       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1207 22:35:20.432209       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [a0a3f7747e6dde69a78a18a0ec96bc7cfe02e3f2db746ef7631aac0314553e12] <==
	E1207 22:38:27.047141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:28.492327       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:28.493383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:34.296279       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:34.297186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:35.306270       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:35.307167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:37.623340       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:37.624266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:43.233461       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:43.234436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:44.669229       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:44.670141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:55.971219       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:55.972127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:58.778621       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:58.779547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:38:58.955264       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:38:58.956052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:39:00.183921       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:39:00.184910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:39:00.669328       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:39:00.670248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1207 22:39:02.584413       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1207 22:39:02.585416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7ec22013ddab02873bd1a7721d350a91368b4489a4fd97bffed120183b9d6f5d] <==
	I1207 22:31:57.619006       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:31:57.679727       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:31:57.781825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:31:57.781884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:31:57.781994       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:31:57.830496       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:31:57.830682       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:31:57.837209       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:31:57.838056       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:31:57.838194       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:31:57.840072       1 config.go:200] "Starting service config controller"
	I1207 22:31:57.841722       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:31:57.840615       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:31:57.842114       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:31:57.841220       1 config.go:309] "Starting node config controller"
	I1207 22:31:57.842284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:31:57.846848       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:31:57.840603       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:31:57.847676       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:31:57.942507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:31:57.942563       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:31:57.948332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [998061cc2a197b456e37469c981d6d7e6eccb94dcca2b9ce4bd676c5e41be358] <==
	E1207 22:31:48.839854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:31:48.839940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:31:48.840009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:31:48.840037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:31:48.840046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:31:48.840117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:31:48.840427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:31:48.840758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:31:48.841266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:31:48.841325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:31:48.841335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:31:48.841329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:31:48.841366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:31:48.841372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:31:48.841397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:31:48.841541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:31:48.841541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:31:49.663764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:31:49.687748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:31:49.687906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:31:49.725222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:31:49.794614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:31:49.876298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:31:49.909380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1207 22:31:50.338448       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 22:38:59 addons-007439 kubelet[1406]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 07 22:38:59 addons-007439 kubelet[1406]: E1207 22:38:59.685078    1406 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 22:38:59 addons-007439 kubelet[1406]:         failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 07 22:38:59 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:38:59 addons-007439 kubelet[1406]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 07 22:38:59 addons-007439 kubelet[1406]: E1207 22:38:59.685163    1406 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 22:38:59 addons-007439 kubelet[1406]:         container helper-pod start failed in pod helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672_local-path-storage(1fde5c92-1d1d-4ee3-9529-42cb93913877): ErrImagePull: failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 07 22:38:59 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:38:59 addons-007439 kubelet[1406]:  > logger="UnhandledError"
	Dec 07 22:38:59 addons-007439 kubelet[1406]: E1207 22:38:59.685192    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672" podUID="1fde5c92-1d1d-4ee3-9529-42cb93913877"
	Dec 07 22:38:59 addons-007439 kubelet[1406]: E1207 22:38:59.708309    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672" podUID="1fde5c92-1d1d-4ee3-9529-42cb93913877"
	Dec 07 22:39:10 addons-007439 kubelet[1406]: E1207 22:39:10.219446    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="3c9264fd-2643-4039-b4ae-33eb195a30d1"
	Dec 07 22:39:13 addons-007439 kubelet[1406]: E1207 22:39:13.122857    1406 log.go:32] "PullImage from image service failed" err=<
	Dec 07 22:39:13 addons-007439 kubelet[1406]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 07 22:39:13 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:39:13 addons-007439 kubelet[1406]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 07 22:39:13 addons-007439 kubelet[1406]: E1207 22:39:13.122921    1406 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 22:39:13 addons-007439 kubelet[1406]:         failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 07 22:39:13 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:39:13 addons-007439 kubelet[1406]:  > image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 07 22:39:13 addons-007439 kubelet[1406]: E1207 22:39:13.123003    1406 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 22:39:13 addons-007439 kubelet[1406]:         container helper-pod start failed in pod helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672_local-path-storage(1fde5c92-1d1d-4ee3-9529-42cb93913877): ErrImagePull: failed to pull and unpack image "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests
	Dec 07 22:39:13 addons-007439 kubelet[1406]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:39:13 addons-007439 kubelet[1406]:  > logger="UnhandledError"
	Dec 07 22:39:13 addons-007439 kubelet[1406]: E1207 22:39:13.123034    1406 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672" podUID="1fde5c92-1d1d-4ee3-9529-42cb93913877"
	
	
	==> storage-provisioner [8c0d12e2b6b9f8c7a71e471624fccf0060cb79ad1cf1e51555cfaeb9771ef31b] <==
	W1207 22:38:48.908433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:50.910966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:50.914578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:52.917181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:52.922308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:54.925149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:54.928726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:56.931306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:56.934908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:58.937634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:38:58.941339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:00.944009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:00.947758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:02.949907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:02.954363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:04.957064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:04.960728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:06.963852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:06.967751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:08.970553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:08.974166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:10.977059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:10.980483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:12.983445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:39:12.987352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
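The kubelet entries in the log above all trace back to Docker Hub's unauthenticated pull limit (HTTP 429 from registry-1.docker.io) rather than to a containerd or networking fault in the cluster. A minimal manual check, assuming the addons-007439 profile is still running (this step is not part of the test harness), would be to retry the pull from inside the node:

# hypothetical manual check; profile name and image taken from the failing run above
out/minikube-linux-amd64 -p addons-007439 ssh -- sudo crictl pull docker.io/library/busybox:stable

A second 429 here would confirm the rate limit as the root cause.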
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-007439 -n addons-007439
helpers_test.go:269: (dbg) Run:  kubectl --context addons-007439 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-007439 describe pod nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-007439 describe pod nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672: exit status 1 (72.360958ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-007439/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:34:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qg2pq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qg2pq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age    From               Message
	  ----     ------     ----   ----               -------
	  Normal   Scheduled  4m43s  default-scheduler  Successfully assigned default/nginx to addons-007439
	  Warning  Failed     4m24s  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  100s (x5 over 4m42s)  kubelet  Pulling image "docker.io/nginx:alpine"
	  Warning  Failed   99s (x4 over 4m41s)   kubelet  Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   99s (x5 over 4m41s)   kubelet  Error: ErrImagePull
	  Warning  Failed   51s (x15 over 4m40s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4s (x19 over 4m40s)   kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c72mg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-c72mg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mx7h5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ktwnj" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-007439 describe pod nginx test-local-path ingress-nginx-admission-create-mx7h5 ingress-nginx-admission-patch-ktwnj helper-pod-create-pvc-54a4bf63-34e9-4e7e-89fa-62d351df0672: exit status 1
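The describe output above shows test-local-path still unscheduled (Node: <none>, no events), which is consistent with its test-pvc claim never binding: the local-path helper pod that provisions the volume could not pull busybox. A hedged follow-up check, assuming the cluster is still reachable (these commands are not part of the harness):

kubectl --context addons-007439 describe pvc test-pvc
kubectl --context addons-007439 -n local-path-storage get pods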
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.686697695s)
--- FAIL: TestAddons/parallel/LocalPath (344.79s)
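One possible mitigation for this class of failure, sketched here as an assumption rather than something the harness currently does, is to side-load the image so the kubelet never has to contact registry-1.docker.io:

# pull on the host (where credentials or a warm cache may be available) and load into the node
docker pull busybox:stable
out/minikube-linux-amd64 -p addons-007439 image load busybox:stable

Note that the helper pod pins busybox by digest, so a side-loaded tag may not satisfy that reference exactly; an authenticated pull (docker login before the run) or a registry mirror is the more robust fix.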

                                                
                                    
x
+
TestDockerEnvContainerd (35.51s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-401533 --driver=docker  --container-runtime=containerd
E1207 22:43:16.821541  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:16.827973  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:16.839348  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:16.860717  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:16.902109  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:16.983538  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:17.145184  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:17.466921  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:18.108747  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-401533 --driver=docker  --container-runtime=containerd: (18.847430318s)
E1207 22:43:19.390484  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:21.953011  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:27.074801  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-401533"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX6Ifmm6/agent.405302" SSH_AGENT_PID="405303" DOCKER_HOST=ssh://docker@127.0.0.1:33150 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX6Ifmm6/agent.405302" SSH_AGENT_PID="405303" DOCKER_HOST=ssh://docker@127.0.0.1:33150 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX6Ifmm6/agent.405302" SSH_AGENT_PID="405303" DOCKER_HOST=ssh://docker@127.0.0.1:33150 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (1.223716811s)

                                                
                                                
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX6Ifmm6/agent.405302" SSH_AGENT_PID="405303" DOCKER_HOST=ssh://docker@127.0.0.1:33150 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
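For local debugging, the docker-env path exercised here can be reproduced by hand; a hedged sketch, assuming the dockerenv-401533 profile is still running and an ssh-agent is available (none of this is part of the harness):

eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-401533)"
docker version        # confirms the ssh:// transport to the dockerd inside the kic container
DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env   # retry with BuildKit
docker image ls | grep minikube-dockerenv-containerd-test

Comparing the legacy-builder and BuildKit attempts would show whether the daemon-side "exit status 1" is specific to the deprecated builder over the SSH transport.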
panic.go:615: *** TestDockerEnvContainerd FAILED at 2025-12-07 22:43:32.063462778 +0000 UTC m=+752.216128866
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-401533
helpers_test.go:243: (dbg) docker inspect dockerenv-401533:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303",
	        "Created": "2025-12-07T22:43:04.598771884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 402797,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:43:04.627805498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303/hostname",
	        "HostsPath": "/var/lib/docker/containers/76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303/hosts",
	        "LogPath": "/var/lib/docker/containers/76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303/76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303-json.log",
	        "Name": "/dockerenv-401533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-401533:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-401533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76348f8ea7509ae03cc6641f65ad72cec8fbff8195309c44c1147ec1f92ac303",
	                "LowerDir": "/var/lib/docker/overlay2/0ec0d82fa7bcd8a21cefdd45f46504af61df2e1edcaf13e9912f2119dfb550e9-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ec0d82fa7bcd8a21cefdd45f46504af61df2e1edcaf13e9912f2119dfb550e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ec0d82fa7bcd8a21cefdd45f46504af61df2e1edcaf13e9912f2119dfb550e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ec0d82fa7bcd8a21cefdd45f46504af61df2e1edcaf13e9912f2119dfb550e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "dockerenv-401533",
	                "Source": "/var/lib/docker/volumes/dockerenv-401533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-401533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-401533",
	                "name.minikube.sigs.k8s.io": "dockerenv-401533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "456c369f42610f823abd3c0d3f3f910f2c99e4dec739aa7273a72e1835bf4929",
	            "SandboxKey": "/var/run/docker/netns/456c369f4261",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "Networks": {
	                "dockerenv-401533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49460a57a85892a63a592758194e872d4a49365ecae36a17040c1f4ff98fc352",
	                    "EndpointID": "02bee3f9c8d8096a93ea42ffd8a501154d5b67ef585e2606292c97af4b3701fe",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "76:c6:cf:e4:bf:b9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-401533",
	                        "76348f8ea750"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
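The inspect output above shows every exposed container port published to an ephemeral port on 127.0.0.1. A minimal sketch, using the profile name from this run, of pulling one mapping back out with the same Go template the harness uses later in these logs:

    # host port that 22/tcp was published to (33150 in this run)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' dockerenv-401533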
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-401533 -n dockerenv-401533
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-401533 logs -n 25
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                      ARGS                                       │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-007439 addons disable gcp-auth --alsologtostderr -v=1                    │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ enable headlamp -p addons-007439 --alsologtostderr -v=1                         │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable nvidia-device-plugin --alsologtostderr -v=1        │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable headlamp --alsologtostderr -v=1                    │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ ip         │ addons-007439 ip                                                                │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable registry --alsologtostderr -v=1                    │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable metrics-server --alsologtostderr -v=1              │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable inspektor-gadget --alsologtostderr -v=1            │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-007439  │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable registry-creds --alsologtostderr -v=1              │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable cloud-spanner --alsologtostderr -v=1               │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable amd-gpu-device-plugin --alsologtostderr -v=1       │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:34 UTC │
	│ addons     │ addons-007439 addons disable yakd --alsologtostderr -v=1                        │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:34 UTC │ 07 Dec 25 22:35 UTC │
	│ addons     │ addons-007439 addons disable volumesnapshots --alsologtostderr -v=1             │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:35 UTC │ 07 Dec 25 22:35 UTC │
	│ addons     │ addons-007439 addons disable csi-hostpath-driver --alsologtostderr -v=1         │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:35 UTC │ 07 Dec 25 22:35 UTC │
	│ addons     │ addons-007439 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:39 UTC │ 07 Dec 25 22:39 UTC │
	│ addons     │ addons-007439 addons disable ingress-dns --alsologtostderr -v=1                 │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:42 UTC │
	│ addons     │ addons-007439 addons disable ingress --alsologtostderr -v=1                     │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:42 UTC │
	│ stop       │ -p addons-007439                                                                │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:42 UTC │
	│ addons     │ enable dashboard -p addons-007439                                               │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:42 UTC │
	│ addons     │ disable dashboard -p addons-007439                                              │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:42 UTC │
	│ addons     │ disable gvisor -p addons-007439                                                 │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:42 UTC │
	│ delete     │ -p addons-007439                                                                │ addons-007439    │ jenkins │ v1.37.0 │ 07 Dec 25 22:42 UTC │ 07 Dec 25 22:43 UTC │
	│ start      │ -p dockerenv-401533 --driver=docker  --container-runtime=containerd             │ dockerenv-401533 │ jenkins │ v1.37.0 │ 07 Dec 25 22:43 UTC │ 07 Dec 25 22:43 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-401533                                        │ dockerenv-401533 │ jenkins │ v1.37.0 │ 07 Dec 25 22:43 UTC │ 07 Dec 25 22:43 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
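The last two audit entries are the commands exercised by TestDockerEnvContainerd; a minimal reproduction on a host with Docker installed (the report invokes out/minikube-linux-amd64, written here simply as minikube):

    minikube start -p dockerenv-401533 --driver=docker --container-runtime=containerd
    minikube docker-env --ssh-host --ssh-add -p dockerenv-401533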
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:43:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:43:00.121381  402229 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:43:00.121477  402229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:43:00.121480  402229 out.go:374] Setting ErrFile to fd 2...
	I1207 22:43:00.121483  402229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:43:00.121717  402229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:43:00.122151  402229 out.go:368] Setting JSON to false
	I1207 22:43:00.123039  402229 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5129,"bootTime":1765142251,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:43:00.123087  402229 start.go:143] virtualization: kvm guest
	I1207 22:43:00.124967  402229 out.go:179] * [dockerenv-401533] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:43:00.126065  402229 notify.go:221] Checking for updates...
	I1207 22:43:00.126074  402229 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:43:00.127098  402229 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:43:00.128205  402229 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:43:00.129333  402229 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:43:00.130369  402229 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:43:00.131381  402229 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:43:00.132471  402229 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:43:00.156688  402229 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:43:00.156780  402229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:43:00.211729  402229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 22:43:00.201945949 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:43:00.211849  402229 docker.go:319] overlay module found
	I1207 22:43:00.214084  402229 out.go:179] * Using the docker driver based on user configuration
	I1207 22:43:00.215279  402229 start.go:309] selected driver: docker
	I1207 22:43:00.215287  402229 start.go:927] validating driver "docker" against <nil>
	I1207 22:43:00.215300  402229 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:43:00.215394  402229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:43:00.266798  402229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 22:43:00.25738854 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:43:00.267010  402229 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:43:00.267732  402229 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:43:00.267926  402229 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:43:00.269557  402229 out.go:179] * Using Docker driver with root privileges
	I1207 22:43:00.270835  402229 cni.go:84] Creating CNI manager for ""
	I1207 22:43:00.270895  402229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:43:00.270903  402229 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:43:00.270961  402229 start.go:353] cluster config:
	{Name:dockerenv-401533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-401533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:43:00.272208  402229 out.go:179] * Starting "dockerenv-401533" primary control-plane node in "dockerenv-401533" cluster
	I1207 22:43:00.273183  402229 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1207 22:43:00.274314  402229 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:43:00.275262  402229 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:43:00.275285  402229 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1207 22:43:00.275292  402229 cache.go:65] Caching tarball of preloaded images
	I1207 22:43:00.275365  402229 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:43:00.275383  402229 preload.go:238] Found /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1207 22:43:00.275389  402229 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1207 22:43:00.275773  402229 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/config.json ...
	I1207 22:43:00.275792  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/config.json: {Name:mkcc59c0b197f0a76391cd3976d7ff7606631cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:00.294721  402229 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 22:43:00.294732  402229 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 22:43:00.294748  402229 cache.go:243] Successfully downloaded all kic artifacts
	I1207 22:43:00.294780  402229 start.go:360] acquireMachinesLock for dockerenv-401533: {Name:mkfeadb1e22e08680a03e3f57c55f1e196ca62c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 22:43:00.294880  402229 start.go:364] duration metric: took 84.298µs to acquireMachinesLock for "dockerenv-401533"
	I1207 22:43:00.294900  402229 start.go:93] Provisioning new machine with config: &{Name:dockerenv-401533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-401533 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 22:43:00.294960  402229 start.go:125] createHost starting for "" (driver="docker")
	I1207 22:43:00.296538  402229 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1207 22:43:00.296784  402229 start.go:159] libmachine.API.Create for "dockerenv-401533" (driver="docker")
	I1207 22:43:00.296817  402229 client.go:173] LocalClient.Create starting
	I1207 22:43:00.296880  402229 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem
	I1207 22:43:00.296907  402229 main.go:143] libmachine: Decoding PEM data...
	I1207 22:43:00.296922  402229 main.go:143] libmachine: Parsing certificate...
	I1207 22:43:00.296976  402229 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem
	I1207 22:43:00.296995  402229 main.go:143] libmachine: Decoding PEM data...
	I1207 22:43:00.297003  402229 main.go:143] libmachine: Parsing certificate...
	I1207 22:43:00.297321  402229 cli_runner.go:164] Run: docker network inspect dockerenv-401533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 22:43:00.312992  402229 cli_runner.go:211] docker network inspect dockerenv-401533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 22:43:00.313043  402229 network_create.go:284] running [docker network inspect dockerenv-401533] to gather additional debugging logs...
	I1207 22:43:00.313053  402229 cli_runner.go:164] Run: docker network inspect dockerenv-401533
	W1207 22:43:00.328432  402229 cli_runner.go:211] docker network inspect dockerenv-401533 returned with exit code 1
	I1207 22:43:00.328447  402229 network_create.go:287] error running [docker network inspect dockerenv-401533]: docker network inspect dockerenv-401533: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-401533 not found
	I1207 22:43:00.328455  402229 network_create.go:289] output of [docker network inspect dockerenv-401533]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-401533 not found
	
	** /stderr **
	I1207 22:43:00.328531  402229 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:43:00.343715  402229 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c99990}
	I1207 22:43:00.343748  402229 network_create.go:124] attempt to create docker network dockerenv-401533 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 22:43:00.343792  402229 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-401533 dockerenv-401533
	I1207 22:43:00.387347  402229 network_create.go:108] docker network dockerenv-401533 192.168.49.0/24 created
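A short check, assuming the network name from this run, that the freshly created network carries the expected subnet and gateway:

    docker network inspect dockerenv-401533 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # 192.168.49.0/24 192.168.49.1 for this run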
	I1207 22:43:00.387369  402229 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-401533" container
	I1207 22:43:00.387430  402229 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 22:43:00.403368  402229 cli_runner.go:164] Run: docker volume create dockerenv-401533 --label name.minikube.sigs.k8s.io=dockerenv-401533 --label created_by.minikube.sigs.k8s.io=true
	I1207 22:43:00.419959  402229 oci.go:103] Successfully created a docker volume dockerenv-401533
	I1207 22:43:00.420013  402229 cli_runner.go:164] Run: docker run --rm --name dockerenv-401533-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-401533 --entrypoint /usr/bin/test -v dockerenv-401533:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 22:43:00.797363  402229 oci.go:107] Successfully prepared a docker volume dockerenv-401533
	I1207 22:43:00.797433  402229 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:43:00.797440  402229 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 22:43:00.797509  402229 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-401533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 22:43:04.529242  402229 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-401533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.731651849s)
	I1207 22:43:04.529267  402229 kic.go:203] duration metric: took 3.731823453s to extract preloaded images to volume ...
	W1207 22:43:04.529404  402229 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 22:43:04.529426  402229 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 22:43:04.529471  402229 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 22:43:04.583782  402229 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-401533 --name dockerenv-401533 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-401533 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-401533 --network dockerenv-401533 --ip 192.168.49.2 --volume dockerenv-401533:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 22:43:04.835663  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Running}}
	I1207 22:43:04.853387  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Status}}
	I1207 22:43:04.870377  402229 cli_runner.go:164] Run: docker exec dockerenv-401533 stat /var/lib/dpkg/alternatives/iptables
	I1207 22:43:04.919545  402229 oci.go:144] the created container "dockerenv-401533" has a running status.
	I1207 22:43:04.919580  402229 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa...
	I1207 22:43:05.039066  402229 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 22:43:05.065440  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Status}}
	I1207 22:43:05.085237  402229 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 22:43:05.085262  402229 kic_runner.go:114] Args: [docker exec --privileged dockerenv-401533 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 22:43:05.129708  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Status}}
	I1207 22:43:05.151008  402229 machine.go:94] provisionDockerMachine start ...
	I1207 22:43:05.151128  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:05.171969  402229 main.go:143] libmachine: Using SSH client type: native
	I1207 22:43:05.172298  402229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1207 22:43:05.172307  402229 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 22:43:05.304164  402229 main.go:143] libmachine: SSH cmd err, output: <nil>: dockerenv-401533
	
	I1207 22:43:05.304183  402229 ubuntu.go:182] provisioning hostname "dockerenv-401533"
	I1207 22:43:05.304248  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:05.322184  402229 main.go:143] libmachine: Using SSH client type: native
	I1207 22:43:05.322406  402229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1207 22:43:05.322414  402229 main.go:143] libmachine: About to run SSH command:
	sudo hostname dockerenv-401533 && echo "dockerenv-401533" | sudo tee /etc/hostname
	I1207 22:43:05.457628  402229 main.go:143] libmachine: SSH cmd err, output: <nil>: dockerenv-401533
	
	I1207 22:43:05.457750  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:05.476273  402229 main.go:143] libmachine: Using SSH client type: native
	I1207 22:43:05.476481  402229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I1207 22:43:05.476492  402229 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-401533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-401533/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-401533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 22:43:05.600572  402229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 22:43:05.600595  402229 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-372312/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-372312/.minikube}
	I1207 22:43:05.600619  402229 ubuntu.go:190] setting up certificates
	I1207 22:43:05.600631  402229 provision.go:84] configureAuth start
	I1207 22:43:05.600731  402229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-401533
	I1207 22:43:05.617671  402229 provision.go:143] copyHostCerts
	I1207 22:43:05.617733  402229 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem, removing ...
	I1207 22:43:05.617742  402229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem
	I1207 22:43:05.617815  402229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem (1123 bytes)
	I1207 22:43:05.617922  402229 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem, removing ...
	I1207 22:43:05.617928  402229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem
	I1207 22:43:05.617967  402229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem (1679 bytes)
	I1207 22:43:05.618055  402229 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem, removing ...
	I1207 22:43:05.618060  402229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem
	I1207 22:43:05.618094  402229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem (1082 bytes)
	I1207 22:43:05.618172  402229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem org=jenkins.dockerenv-401533 san=[127.0.0.1 192.168.49.2 dockerenv-401533 localhost minikube]
	I1207 22:43:05.648148  402229 provision.go:177] copyRemoteCerts
	I1207 22:43:05.648182  402229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 22:43:05.648212  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:05.664686  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:05.756250  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 22:43:05.774340  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 22:43:05.790947  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1207 22:43:05.807258  402229 provision.go:87] duration metric: took 206.616167ms to configureAuth
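configureAuth issues a server certificate whose SANs have to cover every address used to reach the node (127.0.0.1, 192.168.49.2, the hostname, localhost and minikube, per the provision line above). A sketch, assuming the server.pem path shown in these logs, for inspecting those SANs with openssl:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'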
	I1207 22:43:05.807273  402229 ubuntu.go:206] setting minikube options for container-runtime
	I1207 22:43:05.807416  402229 config.go:182] Loaded profile config "dockerenv-401533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:43:05.807421  402229 machine.go:97] duration metric: took 656.401498ms to provisionDockerMachine
	I1207 22:43:05.807425  402229 client.go:176] duration metric: took 5.510604574s to LocalClient.Create
	I1207 22:43:05.807441  402229 start.go:167] duration metric: took 5.51065966s to libmachine.API.Create "dockerenv-401533"
	I1207 22:43:05.807447  402229 start.go:293] postStartSetup for "dockerenv-401533" (driver="docker")
	I1207 22:43:05.807453  402229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 22:43:05.807497  402229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 22:43:05.807524  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:05.825302  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:05.918231  402229 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 22:43:05.921520  402229 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 22:43:05.921534  402229 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 22:43:05.921547  402229 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/addons for local assets ...
	I1207 22:43:05.921603  402229 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/files for local assets ...
	I1207 22:43:05.921622  402229 start.go:296] duration metric: took 114.17101ms for postStartSetup
	I1207 22:43:05.921897  402229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-401533
	I1207 22:43:05.940302  402229 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/config.json ...
	I1207 22:43:05.940495  402229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 22:43:05.940533  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:05.956728  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:06.045154  402229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 22:43:06.049392  402229 start.go:128] duration metric: took 5.754418842s to createHost
	I1207 22:43:06.049409  402229 start.go:83] releasing machines lock for "dockerenv-401533", held for 5.754521287s
	I1207 22:43:06.049471  402229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-401533
	I1207 22:43:06.066044  402229 ssh_runner.go:195] Run: cat /version.json
	I1207 22:43:06.066077  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:06.066122  402229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 22:43:06.066187  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:06.082998  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:06.083346  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:06.224421  402229 ssh_runner.go:195] Run: systemctl --version
	I1207 22:43:06.230393  402229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 22:43:06.234509  402229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 22:43:06.234575  402229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 22:43:06.257512  402229 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 22:43:06.257522  402229 start.go:496] detecting cgroup driver to use...
	I1207 22:43:06.257548  402229 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:43:06.257591  402229 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 22:43:06.270520  402229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 22:43:06.281477  402229 docker.go:218] disabling cri-docker service (if available) ...
	I1207 22:43:06.281509  402229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 22:43:06.295782  402229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 22:43:06.311091  402229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 22:43:06.383845  402229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 22:43:06.461378  402229 docker.go:234] disabling docker service ...
	I1207 22:43:06.461425  402229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 22:43:06.479650  402229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 22:43:06.491103  402229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 22:43:06.569357  402229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 22:43:06.646020  402229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 22:43:06.657160  402229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:43:06.670114  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1207 22:43:06.679240  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 22:43:06.687176  402229 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1207 22:43:06.687212  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1207 22:43:06.695144  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:43:06.702937  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 22:43:06.710773  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:43:06.718526  402229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 22:43:06.725855  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 22:43:06.733606  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1207 22:43:06.741399  402229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1207 22:43:06.749307  402229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 22:43:06.755957  402229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 22:43:06.762620  402229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:43:06.842202  402229 ssh_runner.go:195] Run: sudo systemctl restart containerd
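The sequence above rewrites /etc/containerd/config.toml in place and restarts the daemon. Condensed into a standalone sketch (the two edits kubeadm actually depends on, assuming the stock kicbase config.toml is present):

    # sandbox image and cgroup driver must match what kubeadm is configured for
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd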
	I1207 22:43:06.933929  402229 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1207 22:43:06.933985  402229 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1207 22:43:06.937934  402229 start.go:564] Will wait 60s for crictl version
	I1207 22:43:06.937982  402229 ssh_runner.go:195] Run: which crictl
	I1207 22:43:06.941387  402229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 22:43:06.963751  402229 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1207 22:43:06.963797  402229 ssh_runner.go:195] Run: containerd --version
	I1207 22:43:06.983794  402229 ssh_runner.go:195] Run: containerd --version
	I1207 22:43:07.005845  402229 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1207 22:43:07.006945  402229 cli_runner.go:164] Run: docker network inspect dockerenv-401533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:43:07.023357  402229 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 22:43:07.027099  402229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:43:07.036601  402229 kubeadm.go:884] updating cluster {Name:dockerenv-401533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-401533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 22:43:07.036700  402229 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1207 22:43:07.036743  402229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:43:07.060499  402229 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 22:43:07.060508  402229 containerd.go:534] Images already preloaded, skipping extraction
	I1207 22:43:07.060551  402229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:43:07.082818  402229 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 22:43:07.082837  402229 cache_images.go:86] Images are preloaded, skipping loading
	I1207 22:43:07.082844  402229 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 containerd true true} ...
	I1207 22:43:07.082944  402229 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-401533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:dockerenv-401533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 22:43:07.082988  402229 ssh_runner.go:195] Run: sudo crictl info
	I1207 22:43:07.106055  402229 cni.go:84] Creating CNI manager for ""
	I1207 22:43:07.106063  402229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:43:07.106078  402229 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 22:43:07.106098  402229 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-401533 NodeName:dockerenv-401533 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 22:43:07.106199  402229 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-401533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
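The rendered kubeadm/kubelet/kube-proxy config above is what the kubeadm init call further down consumes. If a config problem is suspected, it can be sanity-checked on the node; a minimal sketch, assuming the file has already been copied to /var/tmp/minikube/kubeadm.yaml as the log shows later:

	# check the generated config without changing anything on the node
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the full init path as a dry run
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run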
	
	I1207 22:43:07.106248  402229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 22:43:07.113338  402229 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 22:43:07.113384  402229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 22:43:07.120317  402229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1207 22:43:07.131645  402229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 22:43:07.145094  402229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1207 22:43:07.156224  402229 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 22:43:07.159343  402229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:43:07.168158  402229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:43:07.241952  402229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:43:07.266504  402229 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533 for IP: 192.168.49.2
	I1207 22:43:07.266516  402229 certs.go:195] generating shared ca certs ...
	I1207 22:43:07.266533  402229 certs.go:227] acquiring lock for ca certs: {Name:mkfbf8cc1b70595007aee06919ab1fb852308b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.266721  402229 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key
	I1207 22:43:07.266788  402229 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key
	I1207 22:43:07.266797  402229 certs.go:257] generating profile certs ...
	I1207 22:43:07.266851  402229 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/client.key
	I1207 22:43:07.266860  402229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/client.crt with IP's: []
	I1207 22:43:07.372070  402229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/client.crt ...
	I1207 22:43:07.372088  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/client.crt: {Name:mk8a3dab47226f128c7b6b94efa784340e0d2eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.372243  402229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/client.key ...
	I1207 22:43:07.372251  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/client.key: {Name:mkca755688a9c548fe548f4052057d388c526d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.372327  402229 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.key.6af568f0
	I1207 22:43:07.372337  402229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.crt.6af568f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1207 22:43:07.438860  402229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.crt.6af568f0 ...
	I1207 22:43:07.438871  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.crt.6af568f0: {Name:mk78ac236e304f8a5691d943514aa90c5ab26ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.438991  402229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.key.6af568f0 ...
	I1207 22:43:07.439007  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.key.6af568f0: {Name:mkdd8c4b0c06b2c69af5056bbf454627019832c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.439071  402229 certs.go:382] copying /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.crt.6af568f0 -> /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.crt
	I1207 22:43:07.439134  402229 certs.go:386] copying /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.key.6af568f0 -> /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.key
	I1207 22:43:07.439180  402229 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.key
	I1207 22:43:07.439190  402229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.crt with IP's: []
	I1207 22:43:07.465732  402229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.crt ...
	I1207 22:43:07.465742  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.crt: {Name:mkde20a2937d4bd16375fb3b0a89be2c339efa4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.465845  402229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.key ...
	I1207 22:43:07.465853  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.key: {Name:mk8993da0c4b30875e577fddd92ca76f55291b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:07.466010  402229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 22:43:07.466038  402229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem (1082 bytes)
	I1207 22:43:07.466058  402229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem (1123 bytes)
	I1207 22:43:07.466080  402229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem (1679 bytes)
	I1207 22:43:07.466712  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 22:43:07.483966  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 22:43:07.500600  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 22:43:07.516414  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1207 22:43:07.532372  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 22:43:07.548086  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 22:43:07.563590  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 22:43:07.579499  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/dockerenv-401533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 22:43:07.595236  402229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 22:43:07.613653  402229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 22:43:07.625085  402229 ssh_runner.go:195] Run: openssl version
	I1207 22:43:07.630761  402229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:43:07.637132  402229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 22:43:07.645795  402229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:43:07.649161  402229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:43:07.649195  402229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:43:07.682088  402229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 22:43:07.688768  402229 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
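Since the apiserver serving certificate generated above is signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2], its SANs can be re-checked on the node if an apiserver TLS problem is suspected; a sketch using the cert path from this log:

	minikube -p dockerenv-401533 ssh -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'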
	I1207 22:43:07.695556  402229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 22:43:07.698688  402229 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 22:43:07.698744  402229 kubeadm.go:401] StartCluster: {Name:dockerenv-401533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:dockerenv-401533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:43:07.698809  402229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1207 22:43:07.698858  402229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:43:07.723552  402229 cri.go:89] found id: ""
	I1207 22:43:07.723595  402229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 22:43:07.730926  402229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 22:43:07.737974  402229 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 22:43:07.738017  402229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 22:43:07.744890  402229 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 22:43:07.744898  402229 kubeadm.go:158] found existing configuration files:
	
	I1207 22:43:07.744927  402229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 22:43:07.751737  402229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 22:43:07.751770  402229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 22:43:07.758269  402229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 22:43:07.764925  402229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 22:43:07.764956  402229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 22:43:07.771512  402229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 22:43:07.778272  402229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 22:43:07.778304  402229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 22:43:07.784976  402229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 22:43:07.791777  402229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 22:43:07.791815  402229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 22:43:07.798333  402229 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 22:43:07.832755  402229 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 22:43:07.832805  402229 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 22:43:07.850633  402229 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 22:43:07.850741  402229 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 22:43:07.850785  402229 kubeadm.go:319] OS: Linux
	I1207 22:43:07.850827  402229 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 22:43:07.850865  402229 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 22:43:07.850942  402229 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 22:43:07.850990  402229 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 22:43:07.851026  402229 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 22:43:07.851080  402229 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 22:43:07.851143  402229 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 22:43:07.851179  402229 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 22:43:07.905047  402229 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 22:43:07.905163  402229 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 22:43:07.905251  402229 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 22:43:07.910427  402229 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 22:43:07.913792  402229 out.go:252]   - Generating certificates and keys ...
	I1207 22:43:07.913875  402229 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 22:43:07.913953  402229 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 22:43:08.102582  402229 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 22:43:08.306129  402229 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 22:43:08.567887  402229 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 22:43:08.729625  402229 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 22:43:08.921110  402229 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 22:43:08.921283  402229 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [dockerenv-401533 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:43:09.324126  402229 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 22:43:09.324313  402229 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-401533 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:43:09.411440  402229 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 22:43:09.502510  402229 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 22:43:09.923894  402229 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 22:43:09.924053  402229 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 22:43:10.147482  402229 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 22:43:10.703428  402229 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 22:43:10.986318  402229 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 22:43:11.086885  402229 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 22:43:11.260078  402229 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 22:43:11.260537  402229 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 22:43:11.264243  402229 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 22:43:11.265737  402229 out.go:252]   - Booting up control plane ...
	I1207 22:43:11.265811  402229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 22:43:11.265894  402229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 22:43:11.266415  402229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 22:43:11.280203  402229 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 22:43:11.280341  402229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 22:43:11.286369  402229 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 22:43:11.286610  402229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 22:43:11.286659  402229 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 22:43:11.383321  402229 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 22:43:11.383507  402229 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 22:43:12.884204  402229 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50101381s
	I1207 22:43:12.887099  402229 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 22:43:12.887226  402229 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1207 22:43:12.887306  402229 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 22:43:12.887371  402229 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 22:43:14.391926  402229 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504706555s
	I1207 22:43:14.807413  402229 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.920273706s
	I1207 22:43:16.389097  402229 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501925424s
	I1207 22:43:16.404040  402229 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 22:43:16.412808  402229 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 22:43:16.419755  402229 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 22:43:16.420046  402229 kubeadm.go:319] [mark-control-plane] Marking the node dockerenv-401533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 22:43:16.426801  402229 kubeadm.go:319] [bootstrap-token] Using token: 1yaiwt.xx05ckiy38f0bqxy
	I1207 22:43:16.427960  402229 out.go:252]   - Configuring RBAC rules ...
	I1207 22:43:16.428105  402229 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 22:43:16.431544  402229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 22:43:16.435710  402229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 22:43:16.437957  402229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 22:43:16.440026  402229 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 22:43:16.442219  402229 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 22:43:16.794071  402229 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 22:43:17.207721  402229 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 22:43:17.793587  402229 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 22:43:17.794858  402229 kubeadm.go:319] 
	I1207 22:43:17.794962  402229 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 22:43:17.794970  402229 kubeadm.go:319] 
	I1207 22:43:17.795078  402229 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 22:43:17.795083  402229 kubeadm.go:319] 
	I1207 22:43:17.795114  402229 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 22:43:17.795208  402229 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 22:43:17.795267  402229 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 22:43:17.795285  402229 kubeadm.go:319] 
	I1207 22:43:17.795366  402229 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 22:43:17.795371  402229 kubeadm.go:319] 
	I1207 22:43:17.795429  402229 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 22:43:17.795433  402229 kubeadm.go:319] 
	I1207 22:43:17.795509  402229 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 22:43:17.795617  402229 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 22:43:17.795745  402229 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 22:43:17.795750  402229 kubeadm.go:319] 
	I1207 22:43:17.795871  402229 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 22:43:17.795989  402229 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 22:43:17.795994  402229 kubeadm.go:319] 
	I1207 22:43:17.796110  402229 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1yaiwt.xx05ckiy38f0bqxy \
	I1207 22:43:17.796244  402229 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:85ac6ff048e2b2646b65f112736b58cb81d3dd5c10b3ad35b3c1aa0187f127e5 \
	I1207 22:43:17.796272  402229 kubeadm.go:319] 	--control-plane 
	I1207 22:43:17.796276  402229 kubeadm.go:319] 
	I1207 22:43:17.796382  402229 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 22:43:17.796391  402229 kubeadm.go:319] 
	I1207 22:43:17.796462  402229 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1yaiwt.xx05ckiy38f0bqxy \
	I1207 22:43:17.796550  402229 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:85ac6ff048e2b2646b65f112736b58cb81d3dd5c10b3ad35b3c1aa0187f127e5 
	I1207 22:43:17.798644  402229 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 22:43:17.798782  402229 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 22:43:17.798805  402229 cni.go:84] Creating CNI manager for ""
	I1207 22:43:17.798810  402229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:43:17.800857  402229 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 22:43:17.801863  402229 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 22:43:17.806047  402229 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 22:43:17.806057  402229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 22:43:17.820564  402229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 22:43:18.011438  402229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 22:43:18.011543  402229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:43:18.011548  402229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-401533 minikube.k8s.io/updated_at=2025_12_07T22_43_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=dockerenv-401533 minikube.k8s.io/primary=true
	I1207 22:43:18.020685  402229 ops.go:34] apiserver oom_adj: -16
	I1207 22:43:18.101133  402229 kubeadm.go:1114] duration metric: took 89.656071ms to wait for elevateKubeSystemPrivileges
	I1207 22:43:18.101157  402229 kubeadm.go:403] duration metric: took 10.402418559s to StartCluster
	I1207 22:43:18.101179  402229 settings.go:142] acquiring lock: {Name:mk7216554e80629b12e2915c4ff3500dbf3bd589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:18.101253  402229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:43:18.101920  402229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/kubeconfig: {Name:mkfa57fc45c954633f03c454814d9663bdf56b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:43:18.102135  402229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 22:43:18.102141  402229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 22:43:18.102215  402229 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 22:43:18.102307  402229 addons.go:70] Setting storage-provisioner=true in profile "dockerenv-401533"
	I1207 22:43:18.102324  402229 addons.go:70] Setting default-storageclass=true in profile "dockerenv-401533"
	I1207 22:43:18.102331  402229 addons.go:239] Setting addon storage-provisioner=true in "dockerenv-401533"
	I1207 22:43:18.102353  402229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-401533"
	I1207 22:43:18.102361  402229 host.go:66] Checking if "dockerenv-401533" exists ...
	I1207 22:43:18.102356  402229 config.go:182] Loaded profile config "dockerenv-401533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:43:18.102721  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Status}}
	I1207 22:43:18.102925  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Status}}
	I1207 22:43:18.104393  402229 out.go:179] * Verifying Kubernetes components...
	I1207 22:43:18.105441  402229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:43:18.127005  402229 addons.go:239] Setting addon default-storageclass=true in "dockerenv-401533"
	I1207 22:43:18.127053  402229 host.go:66] Checking if "dockerenv-401533" exists ...
	I1207 22:43:18.127493  402229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 22:43:18.127512  402229 cli_runner.go:164] Run: docker container inspect dockerenv-401533 --format={{.State.Status}}
	I1207 22:43:18.128862  402229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:43:18.128871  402229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 22:43:18.128913  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:18.154859  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:18.157246  402229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 22:43:18.157260  402229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 22:43:18.157333  402229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-401533
	I1207 22:43:18.180498  402229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/dockerenv-401533/id_rsa Username:docker}
	I1207 22:43:18.191813  402229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 22:43:18.236507  402229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:43:18.257483  402229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:43:18.283765  402229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 22:43:18.353884  402229 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
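The sed pipeline a few lines above splices a hosts block into the coredns ConfigMap so that in-cluster lookups of host.minikube.internal resolve to the gateway. Reconstructed from that command (not captured from the cluster), the injected stanza is roughly:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

The live Corefile can be inspected with: kubectl --context dockerenv-401533 -n kube-system get configmap coredns -o yaml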
	I1207 22:43:18.354720  402229 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:43:18.354769  402229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:43:18.512200  402229 api_server.go:72] duration metric: took 410.027437ms to wait for apiserver process to appear ...
	I1207 22:43:18.512214  402229 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:43:18.512231  402229 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 22:43:18.517818  402229 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 22:43:18.518598  402229 api_server.go:141] control plane version: v1.34.2
	I1207 22:43:18.518611  402229 api_server.go:131] duration metric: took 6.393035ms to wait for apiserver health ...
	I1207 22:43:18.518619  402229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:43:18.519482  402229 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 22:43:18.520428  402229 addons.go:530] duration metric: took 418.208368ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 22:43:18.520680  402229 system_pods.go:59] 5 kube-system pods found
	I1207 22:43:18.520711  402229 system_pods.go:61] "etcd-dockerenv-401533" [43a81990-90a1-4b17-a07d-0a2c67bf2ad5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 22:43:18.520721  402229 system_pods.go:61] "kube-apiserver-dockerenv-401533" [ddcab095-2efa-44ba-9bba-006b3f9ce82d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 22:43:18.520730  402229 system_pods.go:61] "kube-controller-manager-dockerenv-401533" [b42d7f65-5425-4861-ab90-81e2f6e9805e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 22:43:18.520737  402229 system_pods.go:61] "kube-scheduler-dockerenv-401533" [df6a431a-8f2d-448e-96a3-1dd4dae607c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 22:43:18.520741  402229 system_pods.go:61] "storage-provisioner" [bdca3106-a59a-49e7-92aa-73466e3bf6b4] Pending
	I1207 22:43:18.520746  402229 system_pods.go:74] duration metric: took 2.122372ms to wait for pod list to return data ...
	I1207 22:43:18.520755  402229 kubeadm.go:587] duration metric: took 418.587032ms to wait for: map[apiserver:true system_pods:true]
	I1207 22:43:18.520767  402229 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:43:18.522707  402229 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:43:18.522732  402229 node_conditions.go:123] node cpu capacity is 8
	I1207 22:43:18.522753  402229 node_conditions.go:105] duration metric: took 1.982646ms to run NodePressure ...
	I1207 22:43:18.522767  402229 start.go:242] waiting for startup goroutines ...
	I1207 22:43:18.856806  402229 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-401533" context rescaled to 1 replicas
	I1207 22:43:18.856833  402229 start.go:247] waiting for cluster config update ...
	I1207 22:43:18.856843  402229 start.go:256] writing updated cluster config ...
	I1207 22:43:18.857075  402229 ssh_runner.go:195] Run: rm -f paused
	I1207 22:43:18.904771  402229 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 22:43:18.907164  402229 out.go:179] * Done! kubectl is now configured to use "dockerenv-401533" cluster and "default" namespace by default
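At this point the profile reports a successful start; a quick way to confirm from the host, assuming the cluster is still running:

	minikube -p dockerenv-401533 status
	kubectl --context dockerenv-401533 cluster-info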
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	5ae53eda61c37       409467f978b4a       9 seconds ago       Running             kindnet-cni               0                   152a8d15155b2       kindnet-djfqb                              kube-system
	f5cfaa4b9cc50       8aa150647e88a       9 seconds ago       Running             kube-proxy                0                   add57776a7937       kube-proxy-pnwlz                           kube-system
	a873f48039dcc       88320b5498ff2       19 seconds ago      Running             kube-scheduler            0                   d77c3e712f953       kube-scheduler-dockerenv-401533            kube-system
	27f91dc0ce8d7       01e8bacf0f500       19 seconds ago      Running             kube-controller-manager   0                   7898693c2659e       kube-controller-manager-dockerenv-401533   kube-system
	14adf4d79b085       a5f569d49a979       19 seconds ago      Running             kube-apiserver            0                   065d378a92799       kube-apiserver-dockerenv-401533            kube-system
	c80e10ed376cb       a3e246e9556e9       19 seconds ago      Running             etcd                      0                   7bf5eb9d95fb4       etcd-dockerenv-401533                      kube-system
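The table above is the CRI runtime's view at log-collection time: the control-plane, kube-proxy and CNI containers are all running with zero restart attempts. An equivalent listing can be pulled by hand; a sketch, assuming the node is still up:

	minikube -p dockerenv-401533 ssh -- sudo crictl ps -a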
	
	
	==> containerd <==
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.075167792Z" level=info msg="CreateContainer within sandbox \"add57776a7937e861e4e9550c33c496c86d7b48e8a91efaabedce05391097808\" for container name:\"kube-proxy\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.081496402Z" level=info msg="Container f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6: CDI devices from CRI Config.CDIDevices: []"
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.088222666Z" level=info msg="CreateContainer within sandbox \"add57776a7937e861e4e9550c33c496c86d7b48e8a91efaabedce05391097808\" for name:\"kube-proxy\" returns container id \"f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.088685413Z" level=info msg="StartContainer for \"f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.090456673Z" level=info msg="connecting to shim f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6" address="unix:///run/containerd/s/cbcf104d514ed4f992aa30159308c0058e5c9d67661a6108b4c260d4baea217e" protocol=ttrpc version=3
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.149640670Z" level=info msg="StartContainer for \"f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6\" returns successfully"
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.287853028Z" level=info msg="RunPodSandbox for name:\"kindnet-djfqb\"  uid:\"574b07c6-1923-4e0b-a8b1-6916ef1033a8\"  namespace:\"kube-system\" returns sandbox id \"152a8d15155b21143d67b48a910107b12da99b2b7ca6164c636038063ae626af\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.292073922Z" level=info msg="CreateContainer within sandbox \"152a8d15155b21143d67b48a910107b12da99b2b7ca6164c636038063ae626af\" for container name:\"kindnet-cni\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.297138117Z" level=info msg="Container 5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f: CDI devices from CRI Config.CDIDevices: []"
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.302634625Z" level=info msg="CreateContainer within sandbox \"152a8d15155b21143d67b48a910107b12da99b2b7ca6164c636038063ae626af\" for name:\"kindnet-cni\" returns container id \"5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.303102650Z" level=info msg="StartContainer for \"5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f\""
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.304005829Z" level=info msg="connecting to shim 5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f" address="unix:///run/containerd/s/bae427eca26b793058eb75c3a06b1c4455a79853289ab934935a6d0d08a09c7e" protocol=ttrpc version=3
	Dec 07 22:43:23 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:23.474935360Z" level=info msg="StartContainer for \"5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f\" returns successfully"
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.063654827Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a26eb591d9525788a5c086d6240570.slice/cri-containerd-14adf4d79b08578cae7f991243827f450f3137253794677231e32b4587f76bb2.scope/hugetlb.2MB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.063786881Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a26eb591d9525788a5c086d6240570.slice/cri-containerd-14adf4d79b08578cae7f991243827f450f3137253794677231e32b4587f76bb2.scope/hugetlb.1GB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.064790726Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b6b032e_4c49_44ff_9a5a_dd9349142cfa.slice/cri-containerd-f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6.scope/hugetlb.2MB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.064956362Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b6b032e_4c49_44ff_9a5a_dd9349142cfa.slice/cri-containerd-f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6.scope/hugetlb.1GB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.066728644Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod574b07c6_1923_4e0b_a8b1_6916ef1033a8.slice/cri-containerd-5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f.scope/hugetlb.2MB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.066865334Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod574b07c6_1923_4e0b_a8b1_6916ef1033a8.slice/cri-containerd-5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f.scope/hugetlb.1GB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.068315808Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02236b9a3bf5295d4b17bfd4e29f4bd6.slice/cri-containerd-c80e10ed376cb6fe2befdd193bc9eff0b9c9707bb290dfd3710590e2586d50b7.scope/hugetlb.2MB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.068438510Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02236b9a3bf5295d4b17bfd4e29f4bd6.slice/cri-containerd-c80e10ed376cb6fe2befdd193bc9eff0b9c9707bb290dfd3710590e2586d50b7.scope/hugetlb.1GB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.069147421Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487ad504451b8bd428db93ce4dec39af.slice/cri-containerd-27f91dc0ce8d7ebb928c74aa895116e56008edf34ab73773f4335b589d94385d.scope/hugetlb.2MB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.069275969Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487ad504451b8bd428db93ce4dec39af.slice/cri-containerd-27f91dc0ce8d7ebb928c74aa895116e56008edf34ab73773f4335b589d94385d.scope/hugetlb.1GB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.069913218Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf189511dc6218856692bd27e203e004a.slice/cri-containerd-a873f48039dccd2d525b3859392e62754b0085c1f4451966bfe8a2769d66bcd8.scope/hugetlb.2MB.events\""
	Dec 07 22:43:27 dockerenv-401533 containerd[656]: time="2025-12-07T22:43:27.070015498Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf189511dc6218856692bd27e203e004a.slice/cri-containerd-a873f48039dccd2d525b3859392e62754b0085c1f4451966bfe8a2769d66bcd8.scope/hugetlb.1GB.events\""
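The repeated "unable to parse 'max 0'" messages appear to come from containerd reading the per-pod hugetlb *.events cgroup files while gathering stats; the same journal excerpt can be pulled straight from the node, for example:

	minikube -p dockerenv-401533 ssh -- sudo journalctl -u containerd --no-pager -n 100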
	
	
	==> describe nodes <==
	Name:               dockerenv-401533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-401533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=dockerenv-401533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_43_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:43:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-401533
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:43:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:43:17 +0000   Sun, 07 Dec 2025 22:43:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:43:17 +0000   Sun, 07 Dec 2025 22:43:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:43:17 +0000   Sun, 07 Dec 2025 22:43:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 07 Dec 2025 22:43:17 +0000   Sun, 07 Dec 2025 22:43:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-401533
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                574b81be-3126-49ca-a5ca-37a002d52686
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-dockerenv-401533                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15s
	  kube-system                 kindnet-djfqb                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10s
	  kube-system                 kube-apiserver-dockerenv-401533             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-controller-manager-dockerenv-401533    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-proxy-pnwlz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-scheduler-dockerenv-401533             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node dockerenv-401533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node dockerenv-401533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node dockerenv-401533 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15s                kubelet          Node dockerenv-401533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s                kubelet          Node dockerenv-401533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s                kubelet          Node dockerenv-401533 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11s                node-controller  Node dockerenv-401533 event: Registered Node dockerenv-401533 in Controller
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [c80e10ed376cb6fe2befdd193bc9eff0b9c9707bb290dfd3710590e2586d50b7] <==
	{"level":"warn","ts":"2025-12-07T22:43:14.151900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.159818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.169424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.183558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.190808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.196959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.203216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.210950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.218626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.230815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.237940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.245370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.251618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.263831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.270035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.276363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.283188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.290338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.298281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.305283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.311386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.332945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.339135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.345347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:43:14.399503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43770","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:43:33 up  1:26,  0 user,  load average: 0.45, 0.51, 1.17
	Linux dockerenv-401533 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ae53eda61c37fa3e1a5565843038570929f5a246801dbbeac1fff28e04c565f] <==
	I1207 22:43:23.671726       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 22:43:23.671990       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 22:43:23.672134       1 main.go:148] setting mtu 1500 for CNI 
	I1207 22:43:23.672153       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 22:43:23.672181       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T22:43:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 22:43:23.782133       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 22:43:23.782169       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 22:43:23.782182       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 22:43:23.782318       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 22:43:24.182628       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 22:43:24.182653       1 metrics.go:72] Registering metrics
	I1207 22:43:24.270149       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [14adf4d79b08578cae7f991243827f450f3137253794677231e32b4587f76bb2] <==
	I1207 22:43:14.859000       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1207 22:43:14.859181       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1207 22:43:14.859980       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:43:14.863561       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:43:14.864425       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1207 22:43:14.871051       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:43:14.871578       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 22:43:14.878337       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:43:15.762947       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 22:43:15.766398       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 22:43:15.766415       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 22:43:16.175010       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:43:16.206872       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:43:16.264964       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 22:43:16.270453       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 22:43:16.271450       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:43:16.274817       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:43:16.777933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:43:17.199146       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:43:17.206940       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 22:43:17.213238       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:43:22.679406       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1207 22:43:22.780616       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:43:22.784283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:43:22.829003       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [27f91dc0ce8d7ebb928c74aa895116e56008edf34ab73773f4335b589d94385d] <==
	I1207 22:43:21.740512       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-401533" podCIDRs=["10.244.0.0/24"]
	I1207 22:43:21.776410       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1207 22:43:21.776461       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:43:21.776471       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 22:43:21.776484       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:43:21.776484       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1207 22:43:21.776607       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1207 22:43:21.776657       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1207 22:43:21.776713       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 22:43:21.776801       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:43:21.777007       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1207 22:43:21.777071       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="dockerenv-401533"
	I1207 22:43:21.777118       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1207 22:43:21.777163       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1207 22:43:21.777222       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 22:43:21.777282       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:43:21.778011       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 22:43:21.778028       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1207 22:43:21.778076       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1207 22:43:21.778253       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 22:43:21.778971       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 22:43:21.780137       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:43:21.784271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:43:21.786577       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 22:43:21.791879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f5cfaa4b9cc50afcc82e1af2073cd767977d2868dfe8caa0fdfdb0b2b964b7c6] <==
	I1207 22:43:23.179133       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:43:23.242129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:43:23.342714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:43:23.342764       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:43:23.342872       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:43:23.361604       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:43:23.361644       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:43:23.366709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:43:23.367035       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:43:23.367049       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:43:23.368080       1 config.go:200] "Starting service config controller"
	I1207 22:43:23.368098       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:43:23.368139       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:43:23.368164       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:43:23.368135       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:43:23.368195       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:43:23.368211       1 config.go:309] "Starting node config controller"
	I1207 22:43:23.368217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:43:23.368224       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:43:23.468505       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:43:23.468516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:43:23.468544       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a873f48039dccd2d525b3859392e62754b0085c1f4451966bfe8a2769d66bcd8] <==
	E1207 22:43:14.805430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:43:14.805434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:43:14.805464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:43:14.805464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:43:14.805528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:43:14.805560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:43:14.805595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:43:14.805638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:43:14.805206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:43:14.805717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:43:14.805723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:43:14.805689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:43:14.805761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:43:15.680081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:43:15.715308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:43:15.723184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:43:15.783491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:43:15.813785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:43:15.925306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:43:15.925304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:43:15.926092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:43:15.972268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:43:15.997363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:43:16.027612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1207 22:43:16.302534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 22:43:17 dockerenv-401533 kubelet[1402]: I1207 22:43:17.204169    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10a26eb591d9525788a5c086d6240570-usr-local-share-ca-certificates\") pod \"kube-apiserver-dockerenv-401533\" (UID: \"10a26eb591d9525788a5c086d6240570\") " pod="kube-system/kube-apiserver-dockerenv-401533"
	Dec 07 22:43:17 dockerenv-401533 kubelet[1402]: I1207 22:43:17.997590    1402 apiserver.go:52] "Watching apiserver"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.003202    1402 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.036529    1402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-dockerenv-401533"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.036834    1402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-dockerenv-401533"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.036942    1402 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-dockerenv-401533"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: E1207 22:43:18.043893    1402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-dockerenv-401533\" already exists" pod="kube-system/etcd-dockerenv-401533"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: E1207 22:43:18.044588    1402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-dockerenv-401533\" already exists" pod="kube-system/kube-scheduler-dockerenv-401533"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: E1207 22:43:18.044611    1402 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-dockerenv-401533\" already exists" pod="kube-system/kube-controller-manager-dockerenv-401533"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.065940    1402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-401533" podStartSLOduration=1.065911468 podStartE2EDuration="1.065911468s" podCreationTimestamp="2025-12-07 22:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:43:18.057755732 +0000 UTC m=+1.113790638" watchObservedRunningTime="2025-12-07 22:43:18.065911468 +0000 UTC m=+1.121946305"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.076196    1402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-401533" podStartSLOduration=1.076174981 podStartE2EDuration="1.076174981s" podCreationTimestamp="2025-12-07 22:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:43:18.066020283 +0000 UTC m=+1.122055120" watchObservedRunningTime="2025-12-07 22:43:18.076174981 +0000 UTC m=+1.132209819"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.096081    1402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-401533" podStartSLOduration=1.096060981 podStartE2EDuration="1.096060981s" podCreationTimestamp="2025-12-07 22:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:43:18.076161238 +0000 UTC m=+1.132196076" watchObservedRunningTime="2025-12-07 22:43:18.096060981 +0000 UTC m=+1.152095821"
	Dec 07 22:43:18 dockerenv-401533 kubelet[1402]: I1207 22:43:18.096221    1402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-401533" podStartSLOduration=1.096215052 podStartE2EDuration="1.096215052s" podCreationTimestamp="2025-12-07 22:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:43:18.096162312 +0000 UTC m=+1.152197151" watchObservedRunningTime="2025-12-07 22:43:18.096215052 +0000 UTC m=+1.152249891"
	Dec 07 22:43:21 dockerenv-401533 kubelet[1402]: I1207 22:43:21.748342    1402 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 22:43:21 dockerenv-401533 kubelet[1402]: I1207 22:43:21.749036    1402 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739778    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/574b07c6-1923-4e0b-a8b1-6916ef1033a8-cni-cfg\") pod \"kindnet-djfqb\" (UID: \"574b07c6-1923-4e0b-a8b1-6916ef1033a8\") " pod="kube-system/kindnet-djfqb"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739812    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/574b07c6-1923-4e0b-a8b1-6916ef1033a8-xtables-lock\") pod \"kindnet-djfqb\" (UID: \"574b07c6-1923-4e0b-a8b1-6916ef1033a8\") " pod="kube-system/kindnet-djfqb"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739831    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b6b032e-4c49-44ff-9a5a-dd9349142cfa-xtables-lock\") pod \"kube-proxy-pnwlz\" (UID: \"9b6b032e-4c49-44ff-9a5a-dd9349142cfa\") " pod="kube-system/kube-proxy-pnwlz"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739846    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnqhl\" (UniqueName: \"kubernetes.io/projected/9b6b032e-4c49-44ff-9a5a-dd9349142cfa-kube-api-access-tnqhl\") pod \"kube-proxy-pnwlz\" (UID: \"9b6b032e-4c49-44ff-9a5a-dd9349142cfa\") " pod="kube-system/kube-proxy-pnwlz"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739861    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/574b07c6-1923-4e0b-a8b1-6916ef1033a8-lib-modules\") pod \"kindnet-djfqb\" (UID: \"574b07c6-1923-4e0b-a8b1-6916ef1033a8\") " pod="kube-system/kindnet-djfqb"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739878    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp2zp\" (UniqueName: \"kubernetes.io/projected/574b07c6-1923-4e0b-a8b1-6916ef1033a8-kube-api-access-lp2zp\") pod \"kindnet-djfqb\" (UID: \"574b07c6-1923-4e0b-a8b1-6916ef1033a8\") " pod="kube-system/kindnet-djfqb"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739917    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b6b032e-4c49-44ff-9a5a-dd9349142cfa-kube-proxy\") pod \"kube-proxy-pnwlz\" (UID: \"9b6b032e-4c49-44ff-9a5a-dd9349142cfa\") " pod="kube-system/kube-proxy-pnwlz"
	Dec 07 22:43:22 dockerenv-401533 kubelet[1402]: I1207 22:43:22.739939    1402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b6b032e-4c49-44ff-9a5a-dd9349142cfa-lib-modules\") pod \"kube-proxy-pnwlz\" (UID: \"9b6b032e-4c49-44ff-9a5a-dd9349142cfa\") " pod="kube-system/kube-proxy-pnwlz"
	Dec 07 22:43:24 dockerenv-401533 kubelet[1402]: I1207 22:43:24.059160    1402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pnwlz" podStartSLOduration=2.059141718 podStartE2EDuration="2.059141718s" podCreationTimestamp="2025-12-07 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:43:24.059027472 +0000 UTC m=+7.115062311" watchObservedRunningTime="2025-12-07 22:43:24.059141718 +0000 UTC m=+7.115176555"
	Dec 07 22:43:26 dockerenv-401533 kubelet[1402]: I1207 22:43:26.272095    1402 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djfqb" podStartSLOduration=4.272075827 podStartE2EDuration="4.272075827s" podCreationTimestamp="2025-12-07 22:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:43:24.067654865 +0000 UTC m=+7.123689704" watchObservedRunningTime="2025-12-07 22:43:26.272075827 +0000 UTC m=+9.328110663"
	

                                                
                                                
-- /stdout --
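
The captured logs above all point at the pod network still coming up: the node condition reports "container runtime network not ready ... cni plugin not initialized", the controller-manager logs "all Nodes are not-Ready. Entering master disruption mode", and kindnet only syncs its caches at 22:43:24. A minimal sketch of how one could confirm this by hand against a live profile (profile and node name are taken from the output above; /etc/cni/net.d is the conventional CNI config directory, an assumption rather than something shown in these logs):

	# check whether kindnet has written its CNI config inside the node container
	minikube -p dockerenv-401533 ssh "ls /etc/cni/net.d"
	# block until the node reports Ready, or time out
	kubectl --context dockerenv-401533 wait --for=condition=Ready node/dockerenv-401533 --timeout=120s
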
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-401533 -n dockerenv-401533
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-401533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-227m6 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-401533 describe pod coredns-66bc5c9577-227m6 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-401533 describe pod coredns-66bc5c9577-227m6 storage-provisioner: exit status 1 (55.040392ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-227m6" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context dockerenv-401533 describe pod coredns-66bc5c9577-227m6 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-401533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-401533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-401533: (1.928389348s)
--- FAIL: TestDockerEnvContainerd (35.51s)
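
The post-mortem above lists coredns-66bc5c9577-227m6 and storage-provisioner as the non-running pods, yet the follow-up describe returned NotFound for both. CoreDNS in particular cannot become Ready until the CNI is initialized. A minimal sketch of how one might wait for those workloads before collecting a post-mortem, assuming the profile still exists and using the standard k8s-app=kube-dns label for CoreDNS (an assumption, not shown in this output):

	kubectl --context dockerenv-401533 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
	kubectl --context dockerenv-401533 -n kube-system wait --for=condition=Ready pod/storage-provisioner --timeout=120s
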

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-463278 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-463278 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-463278 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-463278 --alsologtostderr -v=1] stderr:
I1207 22:46:04.101867  427264 out.go:360] Setting OutFile to fd 1 ...
I1207 22:46:04.102215  427264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:46:04.102231  427264 out.go:374] Setting ErrFile to fd 2...
I1207 22:46:04.102257  427264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:46:04.102583  427264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 22:46:04.102957  427264 mustload.go:66] Loading cluster: functional-463278
I1207 22:46:04.103355  427264 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:46:04.103786  427264 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:46:04.123526  427264 host.go:66] Checking if "functional-463278" exists ...
I1207 22:46:04.124411  427264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 22:46:04.187472  427264 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:46:04.176284694 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 22:46:04.187622  427264 api_server.go:166] Checking apiserver status ...
I1207 22:46:04.187685  427264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:46:04.187764  427264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:46:04.212897  427264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:46:04.315186  427264 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5080/cgroup
W1207 22:46:04.323289  427264 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5080/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1207 22:46:04.323331  427264 ssh_runner.go:195] Run: ls
I1207 22:46:04.326836  427264 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1207 22:46:04.331770  427264 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1207 22:46:04.331817  427264 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1207 22:46:04.331958  427264 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:46:04.331973  427264 addons.go:70] Setting dashboard=true in profile "functional-463278"
I1207 22:46:04.331984  427264 addons.go:239] Setting addon dashboard=true in "functional-463278"
I1207 22:46:04.332009  427264 host.go:66] Checking if "functional-463278" exists ...
I1207 22:46:04.332319  427264 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:46:04.352729  427264 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1207 22:46:04.354068  427264 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1207 22:46:04.355598  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1207 22:46:04.355621  427264 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1207 22:46:04.355711  427264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:46:04.380048  427264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:46:04.487300  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1207 22:46:04.487328  427264 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1207 22:46:04.501782  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1207 22:46:04.501807  427264 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1207 22:46:04.517179  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1207 22:46:04.517206  427264 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1207 22:46:04.531442  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1207 22:46:04.531465  427264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1207 22:46:04.545625  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1207 22:46:04.545648  427264 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1207 22:46:04.563622  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1207 22:46:04.563644  427264 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1207 22:46:04.577776  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1207 22:46:04.577802  427264 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1207 22:46:04.590685  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1207 22:46:04.590731  427264 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1207 22:46:04.602818  427264 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:46:04.602836  427264 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1207 22:46:04.614888  427264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:46:05.060820  427264 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-463278 addons enable metrics-server

                                                
                                                
I1207 22:46:05.062043  427264 addons.go:202] Writing out "functional-463278" config to set dashboard=true...
W1207 22:46:05.062262  427264 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1207 22:46:05.062971  427264 kapi.go:59] client config for functional-463278: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.key", CAFile:"/home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:46:05.063432  427264 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 22:46:05.063450  427264 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 22:46:05.063458  427264 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 22:46:05.063464  427264 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 22:46:05.063469  427264 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 22:46:05.070248  427264 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  edafa242-1beb-476a-abfa-3156de424ab9 688 0 2025-12-07 22:46:05 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-07 22:46:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.100.228,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.100.228],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1207 22:46:05.070394  427264 out.go:285] * Launching proxy ...
* Launching proxy ...
I1207 22:46:05.070459  427264 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-463278 proxy --port 36195]
I1207 22:46:05.070764  427264 dashboard.go:159] Waiting for kubectl to output host:port ...
I1207 22:46:05.112259  427264 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1207 22:46:05.112322  427264 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1207 22:46:05.119941  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf474846-8774-44e1-ac72-bf2a767df315] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d2000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000188780 TLS:<nil>}
I1207 22:46:05.120030  427264 retry.go:31] will retry after 85.755µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.123111  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63e366fc-e439-4da0-9dbf-f132b3291a85] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0008c3c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000544500 TLS:<nil>}
I1207 22:46:05.123158  427264 retry.go:31] will retry after 201.314µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.126152  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4626deba-68fe-40d0-a3c5-aadd02d13f54] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005cfac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001888c0 TLS:<nil>}
I1207 22:46:05.126202  427264 retry.go:31] will retry after 266.618µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.129106  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9af55e9f-7966-44a9-affa-515d56ee89df] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0007ba080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000389680 TLS:<nil>}
I1207 22:46:05.129148  427264 retry.go:31] will retry after 239.394µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.132081  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ba3c110-c0b2-4177-8e59-02f0e9e2e646] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d3500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000188b40 TLS:<nil>}
I1207 22:46:05.132125  427264 retry.go:31] will retry after 643.93µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.134904  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[415b7458-e56c-4f20-bde3-58fff0772553] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005cfbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005448c0 TLS:<nil>}
I1207 22:46:05.134948  427264 retry.go:31] will retry after 761.641µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.137805  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4599a48-5243-42e4-ab70-991410eb400d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0007ba180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003897c0 TLS:<nil>}
I1207 22:46:05.137844  427264 retry.go:31] will retry after 838.152µs: Temporary Error: unexpected response code: 503
I1207 22:46:05.140777  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac54b6d3-5c67-4ae0-9f06-26ff761d18e4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005cfcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000189180 TLS:<nil>}
I1207 22:46:05.140810  427264 retry.go:31] will retry after 1.955041ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.144780  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fc80c2d0-c0b3-44b6-ba76-bad4c7b9caa8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d38c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000389900 TLS:<nil>}
I1207 22:46:05.144824  427264 retry.go:31] will retry after 1.921052ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.148708  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc373af2-a8f2-404c-bb9f-6c6496587a2c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005cfdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000544a00 TLS:<nil>}
I1207 22:46:05.148779  427264 retry.go:31] will retry after 5.197659ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.156519  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[515b7691-044c-48ab-a367-30d9a1638333] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0007ba2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000389a40 TLS:<nil>}
I1207 22:46:05.156560  427264 retry.go:31] will retry after 8.04917ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.166488  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54fbc212-bfe0-47d8-8e38-0079091d28ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005cfec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001892c0 TLS:<nil>}
I1207 22:46:05.166530  427264 retry.go:31] will retry after 11.067733ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.179534  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d1b52840-c2f9-4642-9e15-2a3b17a7e338] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d3a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000389b80 TLS:<nil>}
I1207 22:46:05.179578  427264 retry.go:31] will retry after 8.73299ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.190512  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[293065ac-9d97-4a5d-8365-faa9c040c2fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0016f8080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000544c80 TLS:<nil>}
I1207 22:46:05.190560  427264 retry.go:31] will retry after 15.309632ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.208425  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d3dfe76f-d883-49cb-9838-830aec8428c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d3b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000389cc0 TLS:<nil>}
I1207 22:46:05.208468  427264 retry.go:31] will retry after 28.1813ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.239445  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a32d5caf-9da8-4982-b627-1632e4baf54e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0016f8180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000545040 TLS:<nil>}
I1207 22:46:05.239489  427264 retry.go:31] will retry after 61.265865ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.303532  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d4992540-df53-4a35-8136-27ae24a36ae9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0016f8200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000389e00 TLS:<nil>}
I1207 22:46:05.303569  427264 retry.go:31] will retry after 88.988692ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.395522  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3edc020-eca8-4cb0-98af-d86c487025d9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d3d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c8000 TLS:<nil>}
I1207 22:46:05.395597  427264 retry.go:31] will retry after 135.898528ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.534255  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aed2a9a7-0251-4e9e-b007-4509095776b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0007ba480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000545180 TLS:<nil>}
I1207 22:46:05.534319  427264 retry.go:31] will retry after 219.910413ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.757484  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[427c689b-fc0d-493e-86a0-f025d4a30127] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0005d3e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000189400 TLS:<nil>}
I1207 22:46:05.757547  427264 retry.go:31] will retry after 222.162197ms: Temporary Error: unexpected response code: 503
I1207 22:46:05.982440  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b40dde8-fafb-4467-b6e5-0423c9ddf0cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:05 GMT]] Body:0xc0007ba5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005452c0 TLS:<nil>}
I1207 22:46:05.982509  427264 retry.go:31] will retry after 351.873531ms: Temporary Error: unexpected response code: 503
I1207 22:46:06.337799  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad5d5728-a833-4049-b5fc-b87c02b585a5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:06 GMT]] Body:0xc0016f82c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000189540 TLS:<nil>}
I1207 22:46:06.337860  427264 retry.go:31] will retry after 329.116023ms: Temporary Error: unexpected response code: 503
I1207 22:46:06.669891  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bb569423-5adc-4e86-8fdc-9c57fc1835f2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:06 GMT]] Body:0xc0007ba6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c8140 TLS:<nil>}
I1207 22:46:06.669993  427264 retry.go:31] will retry after 438.488937ms: Temporary Error: unexpected response code: 503
I1207 22:46:07.111143  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93a23fd4-eba0-4c15-8fe8-12e584844d13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:07 GMT]] Body:0xc0016f83c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000189680 TLS:<nil>}
I1207 22:46:07.111219  427264 retry.go:31] will retry after 1.495134323s: Temporary Error: unexpected response code: 503
I1207 22:46:08.610159  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[19cfeee7-1921-4f64-b15a-adf2e5e24d99] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:08 GMT]] Body:0xc0007ba800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c83c0 TLS:<nil>}
I1207 22:46:08.610225  427264 retry.go:31] will retry after 1.766230519s: Temporary Error: unexpected response code: 503
I1207 22:46:10.379414  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d89977d6-995b-4ec0-b188-41426b9eb8f9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:10 GMT]] Body:0xc0007ba940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001897c0 TLS:<nil>}
I1207 22:46:10.379479  427264 retry.go:31] will retry after 2.031969845s: Temporary Error: unexpected response code: 503
I1207 22:46:12.415053  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d54e9ceb-a872-45cb-b6f6-ebd78943b947] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:12 GMT]] Body:0xc0014ea0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000189900 TLS:<nil>}
I1207 22:46:12.415155  427264 retry.go:31] will retry after 3.835546549s: Temporary Error: unexpected response code: 503
I1207 22:46:16.253583  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d9ad61aa-cdb7-4292-a5e2-50e30ca9328b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:16 GMT]] Body:0xc0016f8480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000545400 TLS:<nil>}
I1207 22:46:16.253664  427264 retry.go:31] will retry after 6.319521096s: Temporary Error: unexpected response code: 503
I1207 22:46:22.579131  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59e42656-1dda-4162-a4c2-a196026566c1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:22 GMT]] Body:0xc0014ea1c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c8500 TLS:<nil>}
I1207 22:46:22.579193  427264 retry.go:31] will retry after 9.548915604s: Temporary Error: unexpected response code: 503
I1207 22:46:32.132977  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2a3d1cc-a8df-4d73-9c79-3f396ed06f4d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:32 GMT]] Body:0xc0014ea240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c8780 TLS:<nil>}
I1207 22:46:32.133051  427264 retry.go:31] will retry after 17.954551797s: Temporary Error: unexpected response code: 503
I1207 22:46:50.090351  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5f036ded-2299-4c33-960e-23ba9aa8c7dd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:46:50 GMT]] Body:0xc0016f8640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c88c0 TLS:<nil>}
I1207 22:46:50.090441  427264 retry.go:31] will retry after 15.881523363s: Temporary Error: unexpected response code: 503
I1207 22:47:05.975223  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6c298665-7412-4f1a-840e-83ac1d8797cf] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:47:05 GMT]] Body:0xc0007bb300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000545680 TLS:<nil>}
I1207 22:47:05.975284  427264 retry.go:31] will retry after 18.225927991s: Temporary Error: unexpected response code: 503
I1207 22:47:24.204275  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8006d325-e8c4-4b6a-8210-2cd31ac6c4a8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:47:24 GMT]] Body:0xc0014ea340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c8a00 TLS:<nil>}
I1207 22:47:24.204351  427264 retry.go:31] will retry after 35.454423299s: Temporary Error: unexpected response code: 503
I1207 22:47:59.662258  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49619e0a-8ad2-4e93-9b00-17b58009ce32] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:47:59 GMT]] Body:0xc0007bb3c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005457c0 TLS:<nil>}
I1207 22:47:59.662337  427264 retry.go:31] will retry after 55.822223413s: Temporary Error: unexpected response code: 503
I1207 22:48:55.487874  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72893c93-26e2-4d05-a7a4-6f4c44e64cfc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:48:55 GMT]] Body:0xc0014ea0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000188280 TLS:<nil>}
I1207 22:48:55.487948  427264 retry.go:31] will retry after 45.757876666s: Temporary Error: unexpected response code: 503
I1207 22:49:41.249652  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8936f92c-d63d-4de4-b666-b6bc6eb9966b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:49:41 GMT]] Body:0xc000c1e2c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000545900 TLS:<nil>}
I1207 22:49:41.249757  427264 retry.go:31] will retry after 1m21.847926511s: Temporary Error: unexpected response code: 503
I1207 22:51:03.101018  427264 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[342780e7-be8d-42b7-a9ff-df5a2c4bc645] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:51:03 GMT]] Body:0xc0007ba0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001883c0 TLS:<nil>}
I1207 22:51:03.101114  427264 retry.go:31] will retry after 1m4.224817243s: Temporary Error: unexpected response code: 503
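
The retry intervals above start in the tens of microseconds and roughly double, with jitter, until they reach minute-scale waits: the usual exponential-backoff-with-jitter pattern run against the dashboard service-proxy URL. Below is a minimal, hypothetical Go sketch of such a loop, included only to illustrate the behaviour visible in this log; it is not minikube's actual retry.go or dashboard.go, and the URL, initial delay, cap, and deadline are taken from the log purely as example values.

// Hypothetical sketch of the backoff pattern seen above; not minikube code.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// checkDashboard does one probe and mirrors the
// "Temporary Error: unexpected response code: 503" failures logged above.
func checkDashboard(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected response code: %d", resp.StatusCode)
	}
	return nil
}

// retryUntil retries f with exponential backoff plus jitter until it
// succeeds or the deadline passes, similar to the intervals in the log.
func retryUntil(deadline time.Time, f func() error) error {
	wait := 100 * time.Microsecond // example starting delay
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %v", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
		if wait > time.Minute { // cap the delay, as the later log entries suggest
			wait = time.Minute
		}
	}
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // example deadline only
	_ = retryUntil(deadline, func() error {
		return checkDashboard("http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/")
	})
}

A sustained 503 from the apiserver's service proxy usually means the target Service has no ready endpoints; the post-mortem below collects the container and cluster state needed to diagnose why the dashboard never became reachable here.
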
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-463278
helpers_test.go:243: (dbg) docker inspect functional-463278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	        "Created": "2025-12-07T22:44:12.758544687Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:44:12.792640745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hosts",
	        "LogPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478-json.log",
	        "Name": "/functional-463278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-463278:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-463278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	                "LowerDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-463278",
	                "Source": "/var/lib/docker/volumes/functional-463278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-463278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-463278",
	                "name.minikube.sigs.k8s.io": "functional-463278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "73fcdda28fbda839afd47005c2057d1600198ad8b4bf801d9a3ceddf616c1b58",
	            "SandboxKey": "/var/run/docker/netns/73fcdda28fbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-463278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "018e6fa20a7dd3d69ef98e40b8862683b0183989c49c03038371ca101d5bfc97",
	                    "EndpointID": "e99fd6022d692fa69afcf7bcf689b400a636b35ef2585d88efa596fe15564320",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "56:db:6f:c4:2e:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-463278",
	                        "5ffb24e50471"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
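
When scripting against state like the inspect output above, the published host ports in NetworkSettings.Ports (for example 8441/tcp mapped to 127.0.0.1:33163, the Kubernetes API server port) can be read programmatically. The sketch below assumes the standard Docker Engine Go SDK (github.com/docker/docker/client); the container name is simply the one from this report, and this is an illustrative helper, not part of the test suite.

// Illustrative helper, assuming the Docker Engine Go SDK is available.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Same data as `docker inspect functional-463278` above.
	inspect, err := cli.ContainerInspect(context.Background(), "functional-463278")
	if err != nil {
		log.Fatal(err)
	}

	// Print where 8441/tcp (the API server) is published on the host.
	for _, binding := range inspect.NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("8441/tcp -> %s:%s\n", binding.HostIP, binding.HostPort)
	}
}

The CLI equivalent uses a Go template over the same fields, e.g. docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-463278, which given the output above should print 33163.
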
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-463278 -n functional-463278
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 logs -n 25: (1.161235299s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-463278 image ls                                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount-9p | grep 9p                                                                                                                  │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image save kicbase/echo-server:functional-463278 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh -- ls -la /mount-9p                                                                                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image rm kicbase/echo-server:functional-463278 --alsologtostderr                                                                                    │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image ls                                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh sudo umount -f /mount-9p                                                                                                                        │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ image     │ functional-463278 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ mount     │ -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount2 --alsologtostderr -v=1                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount     │ -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount1 --alsologtostderr -v=1                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount     │ -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount3 --alsologtostderr -v=1                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ ssh       │ functional-463278 ssh findmnt -T /mount1                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ image     │ functional-463278 image ls                                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image save --daemon kicbase/echo-server:functional-463278 --alsologtostderr                                                                         │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh sudo cat /etc/test/nested/copy/375863/hosts                                                                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount1                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh echo hello                                                                                                                                      │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount2                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh cat /etc/hostname                                                                                                                               │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount3                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ tunnel    │ functional-463278 tunnel --alsologtostderr                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel    │ functional-463278 tunnel --alsologtostderr                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount     │ -p functional-463278 --kill=true                                                                                                                                      │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-463278 --alsologtostderr -v=1                                                                                                        │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel    │ functional-463278 tunnel --alsologtostderr                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:45:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:45:57.091874  423436 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:57.091995  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092005  423436 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:57.092011  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092185  423436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:45:57.092601  423436 out.go:368] Setting JSON to false
	I1207 22:45:57.093579  423436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5306,"bootTime":1765142251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:57.093633  423436 start.go:143] virtualization: kvm guest
	I1207 22:45:57.095151  423436 out.go:179] * [functional-463278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:57.096384  423436 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:57.096396  423436 notify.go:221] Checking for updates...
	I1207 22:45:57.098316  423436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:57.099355  423436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:45:57.100301  423436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:45:57.101400  423436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:57.102435  423436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:57.103770  423436 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:45:57.104272  423436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:57.126808  423436 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:57.126897  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.180398  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:62 SystemTime:2025-12-07 22:45:57.171174532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.180563  423436 docker.go:319] overlay module found
	I1207 22:45:57.181969  423436 out.go:179] * Using the docker driver based on existing profile
	I1207 22:45:57.182881  423436 start.go:309] selected driver: docker
	I1207 22:45:57.182896  423436 start.go:927] validating driver "docker" against &{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.183005  423436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:57.183099  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.235635  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:72 SystemTime:2025-12-07 22:45:57.226815895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.236268  423436 cni.go:84] Creating CNI manager for ""
	I1207 22:45:57.236345  423436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:45:57.236398  423436 start.go:353] cluster config:
	{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.238530  423436 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e75f0ea836e87       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   6198658580680       busybox-mount                               default
	2bb308b96eda4       a5f569d49a979       5 minutes ago       Running             kube-apiserver            0                   afb596781aa30       kube-apiserver-functional-463278            kube-system
	a4a323a1e2f64       01e8bacf0f500       5 minutes ago       Running             kube-controller-manager   2                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	8818bd5eb1cbf       a3e246e9556e9       5 minutes ago       Running             etcd                      1                   25e3962e39ce3       etcd-functional-463278                      kube-system
	03e16e4fab102       01e8bacf0f500       5 minutes ago       Exited              kube-controller-manager   1                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	b99f837de77a9       6e38f40d628db       5 minutes ago       Running             storage-provisioner       1                   88b5c757605a6       storage-provisioner                         kube-system
	2f92bec061655       88320b5498ff2       5 minutes ago       Running             kube-scheduler            1                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	8ad918e337f8d       52546a367cc9e       5 minutes ago       Running             coredns                   1                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	c01638eb7603f       409467f978b4a       5 minutes ago       Running             kindnet-cni               1                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	de7501abd974a       8aa150647e88a       5 minutes ago       Running             kube-proxy                1                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c8106c646b2b5       52546a367cc9e       6 minutes ago       Exited              coredns                   0                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	f17929caca326       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       0                   88b5c757605a6       storage-provisioner                         kube-system
	1aebb420b7e16       409467f978b4a       6 minutes ago       Exited              kindnet-cni               0                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	241e1affb6fa4       8aa150647e88a       6 minutes ago       Exited              kube-proxy                0                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c029b7ba4e0eb       88320b5498ff2       6 minutes ago       Exited              kube-scheduler            0                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	bbe9f6a088ec6       a3e246e9556e9       6 minutes ago       Exited              etcd                      0                   25e3962e39ce3       etcd-functional-463278                      kube-system
	
	
	==> containerd <==
	Dec 07 22:50:56 functional-463278 containerd[3808]: time="2025-12-07T22:50:56.790366280Z" level=info msg="container event discarded" container=619865858068029fc3d7311770a09a87039fe71beec75e48d54607e536c2a2d2 type=CONTAINER_STARTED_EVENT
	Dec 07 22:50:57 functional-463278 containerd[3808]: time="2025-12-07T22:50:57.432379832Z" level=info msg="container event discarded" container=e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3 type=CONTAINER_CREATED_EVENT
	Dec 07 22:50:57 functional-463278 containerd[3808]: time="2025-12-07T22:50:57.492670063Z" level=info msg="container event discarded" container=e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3 type=CONTAINER_STARTED_EVENT
	Dec 07 22:50:57 functional-463278 containerd[3808]: time="2025-12-07T22:50:57.544385864Z" level=info msg="container event discarded" container=e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3 type=CONTAINER_STOPPED_EVENT
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.549938219Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.550034730Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.550746659Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.550851188Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.551577014Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.551664754Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.552344267Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.552414787Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.553220634Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.553345916Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.554150089Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.554269664Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.555147591Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.555257860Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.1GB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.555891740Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.2MB.events\""
	Dec 07 22:50:58 functional-463278 containerd[3808]: time="2025-12-07T22:50:58.555988020Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.1GB.events\""
	Dec 07 22:50:59 functional-463278 containerd[3808]: time="2025-12-07T22:50:59.087679593Z" level=info msg="container event discarded" container=619865858068029fc3d7311770a09a87039fe71beec75e48d54607e536c2a2d2 type=CONTAINER_STOPPED_EVENT
	Dec 07 22:51:04 functional-463278 containerd[3808]: time="2025-12-07T22:51:04.799922680Z" level=info msg="container event discarded" container=18a266d392b26b57ade1f25ec19f2d9d455698459a82cedff68c41438a2d0dc2 type=CONTAINER_CREATED_EVENT
	Dec 07 22:51:04 functional-463278 containerd[3808]: time="2025-12-07T22:51:04.800024988Z" level=info msg="container event discarded" container=18a266d392b26b57ade1f25ec19f2d9d455698459a82cedff68c41438a2d0dc2 type=CONTAINER_STARTED_EVENT
	Dec 07 22:51:04 functional-463278 containerd[3808]: time="2025-12-07T22:51:04.979351453Z" level=info msg="container event discarded" container=2deeb609ba1f7b3180dc2906b8e5d69e7a2988cf3821bea9b1e226f5f029011b type=CONTAINER_CREATED_EVENT
	Dec 07 22:51:04 functional-463278 containerd[3808]: time="2025-12-07T22:51:04.979393495Z" level=info msg="container event discarded" container=2deeb609ba1f7b3180dc2906b8e5d69e7a2988cf3821bea9b1e226f5f029011b type=CONTAINER_STARTED_EVENT
	
	
	==> coredns [8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49423 - 38940 "HINFO IN 5983710415482786725.2619087190349043031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056989451s
	
	
	==> coredns [c8106c646b2b5fc16823350d043dc9c6ec16d210e799856486b7c76fe08c83b2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46018 - 48176 "HINFO IN 6865890663098146517.2279839593742268272. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090558582s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-463278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-463278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-463278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_44_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:44:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-463278
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-463278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f5a83adb-14f5-4487-b0c9-2f0e171fa46e
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-g6tqf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 coredns-66bc5c9577-h8rkq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m30s
	  kube-system                 etcd-functional-463278                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m36s
	  kube-system                 kindnet-8r765                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m31s
	  kube-system                 kube-apiserver-functional-463278              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-functional-463278     200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-proxy-k8s49                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-functional-463278              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4lmsb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xmmkt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 5m55s                  kube-proxy       
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m36s                  kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s                  kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s                  kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m32s                  node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	  Normal  NodeReady                6m19s                  kubelet          Node functional-463278 status is now: NodeReady
	  Normal  Starting                 5m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m32s                  node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb] <==
	{"level":"warn","ts":"2025-12-07T22:45:29.999927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.005859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.014795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.021623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.027533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.033502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.039545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.046098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.059538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.065500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.071745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.077937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.084034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.090594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.097141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.104352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.110852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.117485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.124202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.130747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.137443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.153632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.160669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.168476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.212502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	
	
	==> etcd [bbe9f6a088ec6daf4376fac032c87e27c49c98b3182d7279277f7dba7f36d9e7] <==
	{"level":"warn","ts":"2025-12-07T22:44:26.429250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.436264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.442173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.452972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.459855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.466810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.514205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:45:09.161759Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T22:45:09.161847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T22:45:09.162016Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163619Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.163658Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163818Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163836Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163875Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T22:45:16.163881Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-07T22:45:16.163885Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166508Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T22:45:16.166565Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166592Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T22:45:16.166598Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:51:05 up  1:33,  0 user,  load average: 0.27, 0.48, 0.92
	Linux functional-463278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1aebb420b7e165d65642a376d6c5580ab3f33735dedae31a4ccec446fd1ecdb3] <==
	I1207 22:44:35.954967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 22:44:35.955187       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 22:44:35.955345       1 main.go:148] setting mtu 1500 for CNI 
	I1207 22:44:35.955368       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 22:44:35.955391       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T22:44:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 22:44:36.182994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 22:44:36.183470       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 22:44:36.183518       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 22:44:36.183832       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 22:44:36.484277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 22:44:36.484299       1 metrics.go:72] Registering metrics
	I1207 22:44:36.484346       1 controller.go:711] "Syncing nftables rules"
	I1207 22:44:46.184838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:46.184893       1 main.go:301] handling current node
	I1207 22:44:56.191643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:56.191687       1 main.go:301] handling current node
	I1207 22:45:06.188919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:45:06.188951       1 main.go:301] handling current node
	
	
	==> kindnet [c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138] <==
	I1207 22:48:59.591434       1 main.go:301] handling current node
	I1207 22:49:09.591353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:49:09.591408       1 main.go:301] handling current node
	I1207 22:49:19.592178       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:49:19.592209       1 main.go:301] handling current node
	I1207 22:49:29.592146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:49:29.592177       1 main.go:301] handling current node
	I1207 22:49:39.592869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:49:39.592935       1 main.go:301] handling current node
	I1207 22:49:49.600139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:49:49.600171       1 main.go:301] handling current node
	I1207 22:49:59.592532       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:49:59.592572       1 main.go:301] handling current node
	I1207 22:50:09.592839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:09.592879       1 main.go:301] handling current node
	I1207 22:50:19.599889       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:19.599923       1 main.go:301] handling current node
	I1207 22:50:29.592811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:29.592844       1 main.go:301] handling current node
	I1207 22:50:39.592817       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:39.592854       1 main.go:301] handling current node
	I1207 22:50:49.600092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:49.600125       1 main.go:301] handling current node
	I1207 22:50:59.592809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:59.592841       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db] <==
	I1207 22:45:30.638245       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 22:45:30.638174       1 cache.go:39] Caches are synced for autoregister controller
	I1207 22:45:30.638442       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 22:45:30.639727       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 22:45:30.639750       1 policy_source.go:240] refreshing policies
	I1207 22:45:30.642921       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 22:45:30.676523       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:30.683392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:30.877540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:31.540362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 22:45:31.745456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 22:45:31.746522       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:31.750270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:32.224588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:32.307847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:32.351106       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:32.356452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:39.118928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:49.974663       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.22.34"}
	I1207 22:45:54.183552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.226.250"}
	I1207 22:46:04.555257       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.91.117"}
	I1207 22:46:04.940065       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:46:05.042815       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.100.228"}
	I1207 22:46:05.053530       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.49.67"}
	
	
	==> kube-controller-manager [03e16e4fab102ebca442c6bfb3bbf1506144ba4ce15c3842a23b812ecedd698e] <==
	I1207 22:45:18.193158       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1207 22:45:18.195271       1 controllermanager.go:781] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195294       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1207 22:45:18.195400       1 attach_detach_controller.go:336] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195417       1 shared_informer.go:349] "Waiting for caches to sync" controller="attach detach"
	I1207 22:45:18.197804       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1207 22:45:18.197829       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I1207 22:45:18.197849       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198255       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1207 22:45:18.198279       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1207 22:45:18.198288       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198813       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1207 22:45:18.198845       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1207 22:45:18.198866       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.199293       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1207 22:45:18.199316       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="podcertificaterequest-cleaner-controller" requiredFeatureGates=["PodCertificateRequest"]
	I1207 22:45:18.199490       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1207 22:45:18.199502       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1207 22:45:18.199521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.454250       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1207 22:45:18.454300       1 controllermanager.go:781] "Started controller" controller="node-ipam-controller"
	I1207 22:45:18.454309       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="node-route-controller"
	I1207 22:45:18.454514       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1207 22:45:18.454532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	F1207 22:45:18.454685       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pv-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198] <==
	I1207 22:45:33.954722       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 22:45:33.981269       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 22:45:33.981349       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 22:45:33.981379       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:45:33.981353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:45:33.981423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 22:45:33.981428       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 22:45:33.981439       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:45:33.981499       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 22:45:33.981570       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 22:45:33.982774       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 22:45:33.982803       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 22:45:33.982858       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:45:33.982877       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:45:33.982949       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 22:45:33.984738       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 22:45:33.985928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:45:33.988170       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 22:45:34.004392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1207 22:46:04.986011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.990408       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994096       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994195       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.997536       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:05.002933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [241e1affb6fa46a9506a75655e9de27f3e048cac663b0a5a06aef503e652daed] <==
	I1207 22:44:35.482903       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:35.541247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:44:35.642069       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:44:35.642115       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:35.642234       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:35.661982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:35.662032       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:44:35.667042       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:35.667374       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:44:35.667400       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:35.668618       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:35.668651       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:35.668670       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:35.668742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:35.668749       1 config.go:309] "Starting node config controller"
	I1207 22:44:35.668757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:35.668764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:35.668748       1 config.go:200] "Starting service config controller"
	I1207 22:44:35.668773       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:35.768866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:44:35.768912       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:35.769022       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb] <==
	I1207 22:45:09.306366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:45:09.407512       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:45:09.407557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:09.408004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:09.429174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:09.429227       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:45:09.434622       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:09.434936       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:45:09.434973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:09.436217       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:09.436231       1 config.go:200] "Starting service config controller"
	I1207 22:45:09.436245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:09.436251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:09.436274       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:09.436289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:09.436398       1 config.go:309] "Starting node config controller"
	I1207 22:45:09.436416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:09.436425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:09.537119       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:09.537175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:09.537193       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1207 22:45:30.585918       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.586265       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1207 22:45:30.586266       1 reflector.go:205] "Failed to watch" err="nodes \"functional-463278\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.586264       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-scheduler [2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe] <==
	I1207 22:45:18.203451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203454       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.203876       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:18.203919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:18.304448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.304470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.304579       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 22:45:30.559615       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:45:30.559636       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.566816       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:45:30.566860       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:45:30.566880       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:45:30.566895       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:45:30.566929       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:45:30.566953       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:45:30.567950       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:45:30.568153       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:45:30.568339       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:45:30.569325       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:45:30.569581       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:45:30.569607       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.577884       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 22:45:30.581596       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:45:30.582317       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [c029b7ba4e0eb3389fbfbfbaf6190b61ae6eebcde41377e62c6e973e8138ad72] <==
	E1207 22:44:26.890579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:26.890538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:44:26.890639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:44:26.890644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:44:26.890681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:26.890685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:44:26.890794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:27.715111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:44:27.728196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:27.769160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:44:27.774090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:44:27.792212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:27.900910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:44:27.908824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:44:27.921774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:44:27.971171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:44:28.036376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:28.098446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1207 22:44:31.286992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024037       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 22:45:09.024135       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024235       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 22:45:09.024258       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 22:45:09.024272       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1207 22:45:09.024292       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 22:49:58 functional-463278 kubelet[4931]: E1207 22:49:58.892514    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:49:58 functional-463278 kubelet[4931]: E1207 22:49:58.893169    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:50:06 functional-463278 kubelet[4931]: E1207 22:50:06.892684    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 22:50:10 functional-463278 kubelet[4931]: E1207 22:50:10.892066    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:50:11 functional-463278 kubelet[4931]: E1207 22:50:11.892301    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 22:50:11 functional-463278 kubelet[4931]: E1207 22:50:11.893287    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 22:50:11 functional-463278 kubelet[4931]: E1207 22:50:11.893359    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:50:20 functional-463278 kubelet[4931]: E1207 22:50:20.892836    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 22:50:21 functional-463278 kubelet[4931]: E1207 22:50:21.892347    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:50:22 functional-463278 kubelet[4931]: E1207 22:50:22.892673    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 22:50:23 functional-463278 kubelet[4931]: E1207 22:50:23.893094    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:50:25 functional-463278 kubelet[4931]: E1207 22:50:25.892123    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 22:50:35 functional-463278 kubelet[4931]: E1207 22:50:35.892661    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 22:50:36 functional-463278 kubelet[4931]: E1207 22:50:36.892476    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:50:37 functional-463278 kubelet[4931]: E1207 22:50:37.895581    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:50:37 functional-463278 kubelet[4931]: E1207 22:50:37.895668    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 22:50:39 functional-463278 kubelet[4931]: E1207 22:50:39.892242    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 22:50:48 functional-463278 kubelet[4931]: E1207 22:50:48.892550    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 22:50:49 functional-463278 kubelet[4931]: E1207 22:50:49.892685    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:50:50 functional-463278 kubelet[4931]: E1207 22:50:50.892011    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:50:52 functional-463278 kubelet[4931]: E1207 22:50:52.892821    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 22:50:53 functional-463278 kubelet[4931]: E1207 22:50:53.891844    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 22:51:00 functional-463278 kubelet[4931]: E1207 22:51:00.892780    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:51:03 functional-463278 kubelet[4931]: E1207 22:51:03.891940    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:51:03 functional-463278 kubelet[4931]: E1207 22:51:03.892443    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	
	
	==> storage-provisioner [b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c] <==
	W1207 22:50:40.258527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:42.261624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:42.266204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:44.268393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:44.271945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:46.274892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:46.278403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:48.280645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:48.284315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:50.287822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:50.291807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:52.294062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:52.298391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:54.301087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:54.304683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:56.307230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:56.311579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:58.313884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:50:58.318554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:00.321270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:00.325021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:02.327981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:02.332773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:04.335739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:04.339442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f17929caca3262a3a1b351d4290ad2ef17b04e0326dd1bbaaaad3c4f6ecc1761] <==
	W1207 22:44:46.734205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:46.737325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:44:46.832545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-463278_fdf61bc9-6000-426f-adc4-cbf24786c29e!
	W1207 22:44:48.739998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:48.743651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.747390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.751564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.754931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.758942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.761505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.766768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.770510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.774194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.777335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.782858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.786501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.790145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.793348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.798083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.800761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.804484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.807470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.811061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.814264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.818406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
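Every image pull in the kubelet log above fails the same way: registry-1.docker.io answers 429 Too Many Requests ("toomanyrequests: You have reached your unauthenticated pull rate limit"), so the affected pods never leave ImagePullBackOff. As a quick out-of-band check, Docker Hub exposes the anonymous pull quota through rate-limit headers; the following is a minimal sketch, assuming the documented ratelimitpreview/test endpoint and jq are available on the CI host:

	# Fetch an anonymous pull token for the rate-limit preview repository
	TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# A HEAD request returns the current quota headers (per Docker's docs it should not consume a pull)
	curl -fsS --head -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'
	# Expect headers such as: ratelimit-limit: 100;w=21600 and ratelimit-remaining: 0;w=21600

A ratelimit-remaining of 0 from the CI host's address would be consistent with the 429 responses recorded above.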
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-463278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-g6tqf nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1 (79.413961ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:45:57 +0000
	      Finished:     Sun, 07 Dec 2025 22:45:57 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nlmhb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-nlmhb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m9s   default-scheduler  Successfully assigned default/busybox-mount to functional-463278
	  Normal  Pulling    5m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-g6tqf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m11s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g6tqf to functional-463278
	  Warning  Failed     3m36s (x2 over 5m11s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  2m9s (x5 over 5m12s)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   2m8s (x5 over 5m11s)  kubelet  Error: ErrImagePull
	  Warning  Failed   2m8s (x3 over 4m57s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  1s (x21 over 5m11s)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   1s (x21 over 5m11s)  kubelet  Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkl8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkl8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m1s                 default-scheduler  Successfully assigned default/nginx-svc to functional-463278
	  Normal   Pulling    2m2s (x5 over 5m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m1s (x5 over 5m)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m1s (x5 over 5m)     kubelet  Error: ErrImagePull
	  Warning  Failed   55s (x15 over 4m59s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  6s (x19 over 4m59s)   kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmx4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jmx4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-463278
	  Normal   Pulling    2m6s (x5 over 5m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m5s (x5 over 5m1s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m5s (x5 over 5m1s)  kubelet  Error: ErrImagePull
	  Warning  Failed   56s (x15 over 5m)    kubelet  Error: ImagePullBackOff
	  Normal   BackOff  3s (x19 over 5m)     kubelet  Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4lmsb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xmmkt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.08s)
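The non-running pods listed in the post-mortem are either completed (busybox-mount) or stuck in ImagePullBackOff on Docker Hub images, including the two dashboard pods, so this test times out in the same way as the others. One possible mitigation, sketched here on the assumption that the CI host has authenticated registry access and that loading by tag satisfies the pod specs (the dashboard images are pinned by digest, so those may need the digest form), is to side-load the images so the kubelet never has to contact registry-1.docker.io:

	# Pull once on the host with authenticated credentials...
	docker login
	docker pull docker.io/nginx:alpine
	docker pull docker.io/kicbase/echo-server:latest
	# ...then load them into the functional-463278 node's containerd image store
	minikube -p functional-463278 image load docker.io/nginx:alpine
	minikube -p functional-463278 image load docker.io/kicbase/echo-server:latest
	# Verify what the node can see
	minikube -p functional-463278 image ls

This does not change the tests themselves; it only removes the dependency on anonymous Docker Hub pulls that produced the 429 responses.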

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-463278 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-463278 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-h4vhx" [6311e70b-1eaf-4894-8481-b6038190b988] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-07 23:01:57.394312341 +0000 UTC m=+1857.546978433
functional_test.go:1645: (dbg) Run:  kubectl --context functional-463278 describe po hello-node-connect-7d85dfc575-h4vhx -n default
functional_test.go:1645: (dbg) kubectl --context functional-463278 describe po hello-node-connect-7d85dfc575-h4vhx -n default:
Name:             hello-node-connect-7d85dfc575-h4vhx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-463278/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:51:57 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7v52 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-l7v52:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h4vhx to functional-463278
  Normal   Pulling    7m12s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m11s (x5 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
           toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   7m11s (x5 over 9m59s)   kubelet  Error: ErrImagePull
  Warning  Failed   4m55s (x20 over 9m59s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m40s (x21 over 9m59s)  kubelet  Back-off pulling image "kicbase/echo-server"
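Note: the ErrImagePull/ImagePullBackOff pair above is kubelet's normal retry behaviour: each failed pull attempt emits ErrImagePull, and the pod then waits in ImagePullBackOff with an exponentially growing delay before the next attempt. The same events could be fetched straight from the API with a standard field selector (an illustrative command, not something this test runs):

	kubectl --context functional-463278 get events -n default \
	  --field-selector involvedObject.name=hello-node-connect-7d85dfc575-h4vhx --sort-by=.lastTimestamp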
functional_test.go:1645: (dbg) Run:  kubectl --context functional-463278 logs hello-node-connect-7d85dfc575-h4vhx -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-463278 logs hello-node-connect-7d85dfc575-h4vhx -n default: exit status 1 (59.771129ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-h4vhx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-463278 logs hello-node-connect-7d85dfc575-h4vhx -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-463278 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-h4vhx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-463278/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:51:57 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7v52 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-l7v52:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h4vhx to functional-463278
  Normal   Pulling    7m12s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m11s (x5 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
           toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   7m11s (x5 over 9m59s)   kubelet  Error: ErrImagePull
  Warning  Failed   4m55s (x20 over 9m59s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m40s (x21 over 9m59s)  kubelet  Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-463278 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-463278 logs -l app=hello-node-connect: exit status 1 (58.96021ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-h4vhx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-463278 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-463278 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.24.204
IPs:                      10.110.24.204
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30121/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
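Note: the empty Endpoints: field is the key detail in this Service describe: the Service selects app=hello-node-connect, but the only matching pod never becomes Ready, so no endpoint is published and NodePort 30121 has nothing to forward to. One way to confirm this from the EndpointSlice API (an illustrative command, not part of the test):

	kubectl --context functional-463278 get endpointslices -n default \
	  -l kubernetes.io/service-name=hello-node-connect -o wide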
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-463278
helpers_test.go:243: (dbg) docker inspect functional-463278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	        "Created": "2025-12-07T22:44:12.758544687Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:44:12.792640745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hosts",
	        "LogPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478-json.log",
	        "Name": "/functional-463278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-463278:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-463278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	                "LowerDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-463278",
	                "Source": "/var/lib/docker/volumes/functional-463278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-463278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-463278",
	                "name.minikube.sigs.k8s.io": "functional-463278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "73fcdda28fbda839afd47005c2057d1600198ad8b4bf801d9a3ceddf616c1b58",
	            "SandboxKey": "/var/run/docker/netns/73fcdda28fbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-463278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "018e6fa20a7dd3d69ef98e40b8862683b0183989c49c03038371ca101d5bfc97",
	                    "EndpointID": "e99fd6022d692fa69afcf7bcf689b400a636b35ef2585d88efa596fe15564320",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "56:db:6f:c4:2e:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-463278",
	                        "5ffb24e50471"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
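Note: individual fields of the inspect dump above can be extracted with a Go-template query instead of reading the full JSON; for example, the host port that 8441/tcp (the apiserver port) is published on, which in this dump is 33163 (illustrative command, not executed by the test):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-463278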
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-463278 -n functional-463278
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 logs -n 25: (1.164152613s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-463278 ssh findmnt -T /mount2                                                               │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh            │ functional-463278 ssh cat /etc/hostname                                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh            │ functional-463278 ssh findmnt -T /mount3                                                               │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ tunnel         │ functional-463278 tunnel --alsologtostderr                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel         │ functional-463278 tunnel --alsologtostderr                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount          │ -p functional-463278 --kill=true                                                                       │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-463278 --alsologtostderr -v=1                                         │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel         │ functional-463278 tunnel --alsologtostderr                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ addons         │ functional-463278 addons list                                                                          │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ addons         │ functional-463278 addons list -o json                                                                  │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-463278 update-context --alsologtostderr -v=2                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-463278 update-context --alsologtostderr -v=2                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-463278 update-context --alsologtostderr -v=2                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format short --alsologtostderr                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format yaml --alsologtostderr                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-463278 ssh pgrep buildkitd                                                                  │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ image          │ functional-463278 image build -t localhost/my-image:functional-463278 testdata/build --alsologtostderr │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls                                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format json --alsologtostderr                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format table --alsologtostderr                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ service        │ functional-463278 service list                                                                         │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ service        │ functional-463278 service list -o json                                                                 │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ service        │ functional-463278 service --namespace=default --https --url hello-node                                 │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ service        │ functional-463278 service hello-node --url --format={{.IP}}                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ service        │ functional-463278 service hello-node --url                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:45:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:45:57.091874  423436 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:57.091995  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092005  423436 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:57.092011  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092185  423436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:45:57.092601  423436 out.go:368] Setting JSON to false
	I1207 22:45:57.093579  423436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5306,"bootTime":1765142251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:57.093633  423436 start.go:143] virtualization: kvm guest
	I1207 22:45:57.095151  423436 out.go:179] * [functional-463278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:57.096384  423436 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:57.096396  423436 notify.go:221] Checking for updates...
	I1207 22:45:57.098316  423436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:57.099355  423436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:45:57.100301  423436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:45:57.101400  423436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:57.102435  423436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:57.103770  423436 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:45:57.104272  423436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:57.126808  423436 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:57.126897  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.180398  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:62 SystemTime:2025-12-07 22:45:57.171174532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.180563  423436 docker.go:319] overlay module found
	I1207 22:45:57.181969  423436 out.go:179] * Using the docker driver based on existing profile
	I1207 22:45:57.182881  423436 start.go:309] selected driver: docker
	I1207 22:45:57.182896  423436 start.go:927] validating driver "docker" against &{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.183005  423436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:57.183099  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.235635  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:72 SystemTime:2025-12-07 22:45:57.226815895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.236268  423436 cni.go:84] Creating CNI manager for ""
	I1207 22:45:57.236345  423436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:45:57.236398  423436 start.go:353] cluster config:
	{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.238530  423436 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e75f0ea836e87       56cc512116c8f       16 minutes ago      Exited              mount-munger              0                   6198658580680       busybox-mount                               default
	2bb308b96eda4       a5f569d49a979       16 minutes ago      Running             kube-apiserver            0                   afb596781aa30       kube-apiserver-functional-463278            kube-system
	a4a323a1e2f64       01e8bacf0f500       16 minutes ago      Running             kube-controller-manager   2                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	8818bd5eb1cbf       a3e246e9556e9       16 minutes ago      Running             etcd                      1                   25e3962e39ce3       etcd-functional-463278                      kube-system
	03e16e4fab102       01e8bacf0f500       16 minutes ago      Exited              kube-controller-manager   1                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	b99f837de77a9       6e38f40d628db       16 minutes ago      Running             storage-provisioner       1                   88b5c757605a6       storage-provisioner                         kube-system
	2f92bec061655       88320b5498ff2       16 minutes ago      Running             kube-scheduler            1                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	8ad918e337f8d       52546a367cc9e       16 minutes ago      Running             coredns                   1                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	c01638eb7603f       409467f978b4a       16 minutes ago      Running             kindnet-cni               1                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	de7501abd974a       8aa150647e88a       16 minutes ago      Running             kube-proxy                1                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c8106c646b2b5       52546a367cc9e       17 minutes ago      Exited              coredns                   0                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	f17929caca326       6e38f40d628db       17 minutes ago      Exited              storage-provisioner       0                   88b5c757605a6       storage-provisioner                         kube-system
	1aebb420b7e16       409467f978b4a       17 minutes ago      Exited              kindnet-cni               0                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	241e1affb6fa4       8aa150647e88a       17 minutes ago      Exited              kube-proxy                0                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c029b7ba4e0eb       88320b5498ff2       17 minutes ago      Exited              kube-scheduler            0                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	bbe9f6a088ec6       a3e246e9556e9       17 minutes ago      Exited              etcd                      0                   25e3962e39ce3       etcd-functional-463278                      kube-system
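	Note: this container-status table reflects the CRI runtime's view on the node; an equivalent listing can be produced manually over the minikube SSH session, assuming crictl and sudo are available inside the node image (a sketch, not part of the collected logs):
	
	  minikube -p functional-463278 ssh -- sudo crictl ps -a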
	
	
	==> containerd <==
	Dec 07 23:01:39 functional-463278 containerd[3808]: time="2025-12-07T23:01:39.806128704Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.1GB.events\""
	Dec 07 23:01:39 functional-463278 containerd[3808]: time="2025-12-07T23:01:39.806833842Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.2MB.events\""
	Dec 07 23:01:39 functional-463278 containerd[3808]: time="2025-12-07T23:01:39.806946696Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.1GB.events\""
	Dec 07 23:01:39 functional-463278 containerd[3808]: time="2025-12-07T23:01:39.807571787Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.2MB.events\""
	Dec 07 23:01:39 functional-463278 containerd[3808]: time="2025-12-07T23:01:39.807642694Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.820530632Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.820638748Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.821535212Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.821669952Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.822335177Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.822411934Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.823065408Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.823155154Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.823755665Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.823843914Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.824607886Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.824841178Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.825483426Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.825555074Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.1GB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.826198536Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.2MB.events\""
	Dec 07 23:01:49 functional-463278 containerd[3808]: time="2025-12-07T23:01:49.826275354Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.1GB.events\""
	Dec 07 23:01:56 functional-463278 containerd[3808]: time="2025-12-07T23:01:56.892524159Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 07 23:01:57 functional-463278 containerd[3808]: time="2025-12-07T23:01:57.968279643Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 23:01:57 functional-463278 containerd[3808]: time="2025-12-07T23:01:57.968321145Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11740"
	Dec 07 23:01:57 functional-463278 containerd[3808]: time="2025-12-07T23:01:57.969106605Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
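	Note: the repeated "unable to parse \"max 0\" as a uint" errors above come from containerd's metrics collection reading the cgroup v2 hugetlb event files, whose content is the key/value line "max 0" rather than a bare integer; this appears to be log noise and is unrelated to the image-pull failures. It can be checked directly on the node (illustrative only; the slice path is abbreviated here):
	
	  minikube -p functional-463278 ssh -- cat /sys/fs/cgroup/kubepods.slice/.../hugetlb.2MB.events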
	
	
	==> coredns [8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49423 - 38940 "HINFO IN 5983710415482786725.2619087190349043031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056989451s
	
	
	==> coredns [c8106c646b2b5fc16823350d043dc9c6ec16d210e799856486b7c76fe08c83b2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46018 - 48176 "HINFO IN 6865890663098146517.2279839593742268272. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090558582s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-463278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-463278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-463278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_44_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:44:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-463278
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:01:08 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:01:08 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:01:08 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:01:08 +0000   Sun, 07 Dec 2025 22:44:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-463278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f5a83adb-14f5-4487-b0c9-2f0e171fa46e
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-g6tqf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-h4vhx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-kk5fh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-h8rkq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-463278                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-8r765                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-463278              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-463278     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-k8s49                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-463278              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4lmsb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xmmkt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	  Normal  NodeReady                17m                kubelet          Node functional-463278 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb] <==
	{"level":"warn","ts":"2025-12-07T22:45:30.039545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.046098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.059538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.065500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.071745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.077937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.084034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.090594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.097141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.104352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.110852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.117485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.124202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.130747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.137443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.153632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.160669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.168476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.212502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:55:29.758715Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1193}
	{"level":"info","ts":"2025-12-07T22:55:29.777659Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1193,"took":"18.645782ms","hash":2407612845,"current-db-size-bytes":3903488,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1949696,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-07T22:55:29.777713Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2407612845,"revision":1193,"compact-revision":-1}
	{"level":"info","ts":"2025-12-07T23:00:29.763520Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1615}
	{"level":"info","ts":"2025-12-07T23:00:29.766603Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1615,"took":"2.695773ms","hash":2056168582,"current-db-size-bytes":3903488,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2293760,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-12-07T23:00:29.766641Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2056168582,"revision":1615,"compact-revision":1193}
	
	
	==> etcd [bbe9f6a088ec6daf4376fac032c87e27c49c98b3182d7279277f7dba7f36d9e7] <==
	{"level":"warn","ts":"2025-12-07T22:44:26.429250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.436264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.442173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.452972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.459855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.466810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.514205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:45:09.161759Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T22:45:09.161847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T22:45:09.162016Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163619Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.163658Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163818Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163836Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163875Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T22:45:16.163881Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-07T22:45:16.163885Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166508Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T22:45:16.166565Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166592Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T22:45:16.166598Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:01:58 up  1:44,  0 user,  load average: 0.27, 0.21, 0.52
	Linux functional-463278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1aebb420b7e165d65642a376d6c5580ab3f33735dedae31a4ccec446fd1ecdb3] <==
	I1207 22:44:35.954967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 22:44:35.955187       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 22:44:35.955345       1 main.go:148] setting mtu 1500 for CNI 
	I1207 22:44:35.955368       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 22:44:35.955391       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T22:44:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 22:44:36.182994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 22:44:36.183470       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 22:44:36.183518       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 22:44:36.183832       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 22:44:36.484277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 22:44:36.484299       1 metrics.go:72] Registering metrics
	I1207 22:44:36.484346       1 controller.go:711] "Syncing nftables rules"
	I1207 22:44:46.184838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:46.184893       1 main.go:301] handling current node
	I1207 22:44:56.191643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:56.191687       1 main.go:301] handling current node
	I1207 22:45:06.188919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:45:06.188951       1 main.go:301] handling current node
	
	
	==> kindnet [c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138] <==
	I1207 22:59:49.592792       1 main.go:301] handling current node
	I1207 22:59:59.598126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:59.598169       1 main.go:301] handling current node
	I1207 23:00:09.594581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:09.594613       1 main.go:301] handling current node
	I1207 23:00:19.591998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:19.592033       1 main.go:301] handling current node
	I1207 23:00:29.594313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:29.594369       1 main.go:301] handling current node
	I1207 23:00:39.600396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:39.600438       1 main.go:301] handling current node
	I1207 23:00:49.591354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:49.591398       1 main.go:301] handling current node
	I1207 23:00:59.594643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:59.594677       1 main.go:301] handling current node
	I1207 23:01:09.591600       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:01:09.591638       1 main.go:301] handling current node
	I1207 23:01:19.599972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:01:19.600000       1 main.go:301] handling current node
	I1207 23:01:29.593203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:01:29.593236       1 main.go:301] handling current node
	I1207 23:01:39.599346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:01:39.599378       1 main.go:301] handling current node
	I1207 23:01:49.600008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:01:49.600043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db] <==
	I1207 22:45:30.639727       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 22:45:30.639750       1 policy_source.go:240] refreshing policies
	I1207 22:45:30.642921       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 22:45:30.676523       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:30.683392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:30.877540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:30.877540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:31.540362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 22:45:31.745456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 22:45:31.746522       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:31.750270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:32.224588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:32.307847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:32.351106       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:32.356452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:39.118928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:49.974663       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.22.34"}
	I1207 22:45:54.183552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.226.250"}
	I1207 22:46:04.555257       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.91.117"}
	I1207 22:46:04.940065       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:46:05.042815       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.100.228"}
	I1207 22:46:05.053530       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.49.67"}
	I1207 22:51:06.223470       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.123.117"}
	I1207 22:51:57.073341       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.24.204"}
	I1207 22:55:30.584933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [03e16e4fab102ebca442c6bfb3bbf1506144ba4ce15c3842a23b812ecedd698e] <==
	I1207 22:45:18.193158       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1207 22:45:18.195271       1 controllermanager.go:781] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195294       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1207 22:45:18.195400       1 attach_detach_controller.go:336] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195417       1 shared_informer.go:349] "Waiting for caches to sync" controller="attach detach"
	I1207 22:45:18.197804       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1207 22:45:18.197829       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I1207 22:45:18.197849       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198255       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1207 22:45:18.198279       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1207 22:45:18.198288       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198813       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1207 22:45:18.198845       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1207 22:45:18.198866       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.199293       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1207 22:45:18.199316       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="podcertificaterequest-cleaner-controller" requiredFeatureGates=["PodCertificateRequest"]
	I1207 22:45:18.199490       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1207 22:45:18.199502       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1207 22:45:18.199521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.454250       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1207 22:45:18.454300       1 controllermanager.go:781] "Started controller" controller="node-ipam-controller"
	I1207 22:45:18.454309       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="node-route-controller"
	I1207 22:45:18.454514       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1207 22:45:18.454532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	F1207 22:45:18.454685       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pv-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198] <==
	I1207 22:45:33.954722       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 22:45:33.981269       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 22:45:33.981349       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 22:45:33.981379       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:45:33.981353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:45:33.981423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 22:45:33.981428       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 22:45:33.981439       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:45:33.981499       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 22:45:33.981570       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 22:45:33.982774       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 22:45:33.982803       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 22:45:33.982858       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:45:33.982877       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:45:33.982949       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 22:45:33.984738       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 22:45:33.985928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:45:33.988170       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 22:45:34.004392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1207 22:46:04.986011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.990408       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994096       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994195       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.997536       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:05.002933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [241e1affb6fa46a9506a75655e9de27f3e048cac663b0a5a06aef503e652daed] <==
	I1207 22:44:35.482903       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:35.541247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:44:35.642069       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:44:35.642115       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:35.642234       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:35.661982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:35.662032       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:44:35.667042       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:35.667374       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:44:35.667400       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:35.668618       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:35.668651       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:35.668670       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:35.668742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:35.668749       1 config.go:309] "Starting node config controller"
	I1207 22:44:35.668757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:35.668764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:35.668748       1 config.go:200] "Starting service config controller"
	I1207 22:44:35.668773       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:35.768866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:44:35.768912       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:35.769022       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb] <==
	I1207 22:45:09.306366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:45:09.407512       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:45:09.407557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:09.408004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:09.429174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:09.429227       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:45:09.434622       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:09.434936       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:45:09.434973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:09.436217       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:09.436231       1 config.go:200] "Starting service config controller"
	I1207 22:45:09.436245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:09.436251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:09.436274       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:09.436289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:09.436398       1 config.go:309] "Starting node config controller"
	I1207 22:45:09.436416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:09.436425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:09.537119       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:09.537175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:09.537193       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1207 22:45:30.585918       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.586265       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1207 22:45:30.586266       1 reflector.go:205] "Failed to watch" err="nodes \"functional-463278\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.586264       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-scheduler [2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe] <==
	I1207 22:45:18.203451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203454       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.203876       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:18.203919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:18.304448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.304470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.304579       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 22:45:30.559615       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:45:30.559636       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.566816       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:45:30.566860       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:45:30.566880       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:45:30.566895       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:45:30.566929       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:45:30.566953       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:45:30.567950       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:45:30.568153       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:45:30.568339       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:45:30.569325       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:45:30.569581       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:45:30.569607       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.577884       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 22:45:30.581596       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:45:30.582317       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [c029b7ba4e0eb3389fbfbfbaf6190b61ae6eebcde41377e62c6e973e8138ad72] <==
	E1207 22:44:26.890579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:26.890538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:44:26.890639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:44:26.890644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:44:26.890681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:26.890685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:44:26.890794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:27.715111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:44:27.728196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:27.769160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:44:27.774090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:44:27.792212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:27.900910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:44:27.908824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:44:27.921774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:44:27.971171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:44:28.036376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:28.098446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1207 22:44:31.286992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024037       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 22:45:09.024135       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024235       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 22:45:09.024258       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 22:45:09.024272       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1207 22:45:09.024292       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 23:01:57 functional-463278 kubelet[4931]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	Dec 07 23:01:57 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:01:57 functional-463278 kubelet[4931]:  > image="kicbase/echo-server:latest"
	Dec 07 23:01:57 functional-463278 kubelet[4931]: E1207 23:01:57.968668    4931 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 23:01:57 functional-463278 kubelet[4931]:         failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	Dec 07 23:01:57 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:01:57 functional-463278 kubelet[4931]:  > image="kicbase/echo-server:latest"
	Dec 07 23:01:57 functional-463278 kubelet[4931]: E1207 23:01:57.968855    4931 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 23:01:57 functional-463278 kubelet[4931]:         container echo-server start failed in pod hello-node-75c85bcc94-g6tqf_default(868ae89c-0be5-4a7d-8bfe-31984dcc4024): ErrImagePull: failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	Dec 07 23:01:57 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:01:57 functional-463278 kubelet[4931]:  > logger="UnhandledError"
	Dec 07 23:01:57 functional-463278 kubelet[4931]: E1207 23:01:57.968913    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 23:01:58 functional-463278 kubelet[4931]: E1207 23:01:58.864057    4931 log.go:32] "PullImage from image service failed" err=<
	Dec 07 23:01:58 functional-463278 kubelet[4931]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Dec 07 23:01:58 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:01:58 functional-463278 kubelet[4931]:  > image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 23:01:58 functional-463278 kubelet[4931]: E1207 23:01:58.864116    4931 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 23:01:58 functional-463278 kubelet[4931]:         failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Dec 07 23:01:58 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:01:58 functional-463278 kubelet[4931]:  > image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 23:01:58 functional-463278 kubelet[4931]: E1207 23:01:58.864221    4931 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 23:01:58 functional-463278 kubelet[4931]:         container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-xmmkt_kubernetes-dashboard(202eaeac-4520-44e8-989e-8b210bdf6bbf): ErrImagePull: failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Dec 07 23:01:58 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:01:58 functional-463278 kubelet[4931]:  > logger="UnhandledError"
	Dec 07 23:01:58 functional-463278 kubelet[4931]: E1207 23:01:58.864265    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	
	
	==> storage-provisioner [b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c] <==
	W1207 23:01:34.508945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:36.512429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:36.516172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:38.519393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:38.524268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:40.526940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:40.531730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:42.534552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:42.538340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:44.541151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:44.544892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:46.548310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:46.552985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:48.556568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:48.560181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:50.563252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:50.568279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:52.571658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:52.575416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:54.578077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:54.582900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:56.586241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:56.589899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:58.592909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:58.597781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f17929caca3262a3a1b351d4290ad2ef17b04e0326dd1bbaaaad3c4f6ecc1761] <==
	W1207 22:44:46.734205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:46.737325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:44:46.832545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-463278_fdf61bc9-6000-426f-adc4-cbf24786c29e!
	W1207 22:44:48.739998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:48.743651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.747390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.751564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.754931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.758942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.761505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.766768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.770510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.774194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.777335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.782858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.786501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.790145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.793348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.798083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.800761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.804484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.807470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.811061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.814264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.818406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
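Note: the repeated storage-provisioner lines in the dump above are deprecation warnings only. The provisioner still reads and writes v1 Endpoints objects (presumably for its leader-election lock), which Kubernetes 1.33+ flags in favour of discovery.k8s.io/v1 EndpointSlice; they are noise here and do not contribute to the failure. As a hedged illustration, not something the test suite runs, the replacement resource can be listed against the same profile with:

    kubectl --context functional-463278 get endpointslices.discovery.k8s.io -A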
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-463278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1 (93.906016ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:45:57 +0000
	      Finished:     Sun, 07 Dec 2025 22:45:57 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nlmhb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-nlmhb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16m   default-scheduler  Successfully assigned default/busybox-mount to functional-463278
	  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     16m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    16m   kubelet            Created container: mount-munger
	  Normal  Started    16m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-g6tqf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  16m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g6tqf to functional-463278
	  Warning  Failed     14m (x2 over 16m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  13m (x5 over 16m)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   13m (x5 over 16m)  kubelet  Error: ErrImagePull
	  Warning  Failed   13m (x3 over 15m)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  57s (x65 over 16m)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   57s (x65 over 16m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-h4vhx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7v52 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l7v52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h4vhx to functional-463278
	  Normal   Pulling    7m14s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   7m13s (x5 over 10m)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m57s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m42s (x21 over 10m)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-kk5fh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7b56l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7b56l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-kk5fh to functional-463278
	  Warning  Failed     9m29s (x3 over 10m)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m56s (x5 over 10m)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   7m55s (x2 over 10m)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   7m55s (x5 over 10m)  kubelet  Error: ErrImagePull
	  Normal   BackOff  50s (x42 over 10m)   kubelet  Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed   50s (x42 over 10m)   kubelet  Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkl8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkl8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/nginx-svc to functional-463278
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  46s (x64 over 15m)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed   46s (x64 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmx4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jmx4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/sp-pod to functional-463278
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  47s (x63 over 15m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   47s (x63 over 15m)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4lmsb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xmmkt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.74s)
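Note: every pull failure recorded above is the same condition. Anonymous pulls from registry-1.docker.io return HTTP 429 (toomanyrequests) because the CI host has exhausted Docker Hub's unauthenticated pull quota, so hello-node, hello-node-connect, nginx-svc, sp-pod and mysql never leave ImagePullBackOff and the test times out waiting for them. As a rough check outside the test harness, and assuming curl and jq are available, Docker's documented rate-limit preview endpoint reports the remaining anonymous quota for the current address; the token URL and the ratelimitpreview/test repository below are taken from that documentation:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

A ratelimit-remaining: 0 header would confirm quota exhaustion rather than a registry outage.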

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (367.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [bdf470a8-cc57-40ba-8f7f-f530ce462a2e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00451302s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-463278 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-463278 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-463278 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-463278 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6fc1a713-e1a4-4403-94de-bcd12759e727] Pending
helpers_test.go:352: "sp-pod" [6fc1a713-e1a4-4403-94de-bcd12759e727] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1207 22:48:16.814735  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:48:44.523392  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-07 22:52:04.691641554 +0000 UTC m=+1264.844307627
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-463278 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-463278 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-463278/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmx4b (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-jmx4b:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-463278
Normal   Pulling    3m4s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m3s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   3m3s (x5 over 5m59s)  kubelet  Error: ErrImagePull
Warning  Failed   49s (x20 over 5m58s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  38s (x21 over 5m58s)  kubelet  Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-463278 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-463278 logs sp-pod -n default: exit status 1 (65.532527ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-463278 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
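Note: the PVC path itself is never exercised here; sp-pod stays in ImagePullBackOff because docker.io/nginx cannot be pulled past the same anonymous rate limit. One possible mitigation, sketched here rather than taken from the suite, is to side-load the image into the node's containerd store so the kubelet never contacts the registry; the Audit table further down shows the same image load subcommand already being used for a local tarball. The nginx.tar filename is illustrative:

    docker save -o nginx.tar nginx:latest          # run on a machine that still has pull quota
    minikube -p functional-463278 image load nginx.tar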
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-463278
helpers_test.go:243: (dbg) docker inspect functional-463278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	        "Created": "2025-12-07T22:44:12.758544687Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:44:12.792640745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hosts",
	        "LogPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478-json.log",
	        "Name": "/functional-463278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-463278:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-463278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	                "LowerDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-463278",
	                "Source": "/var/lib/docker/volumes/functional-463278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-463278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-463278",
	                "name.minikube.sigs.k8s.io": "functional-463278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "73fcdda28fbda839afd47005c2057d1600198ad8b4bf801d9a3ceddf616c1b58",
	            "SandboxKey": "/var/run/docker/netns/73fcdda28fbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-463278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "018e6fa20a7dd3d69ef98e40b8862683b0183989c49c03038371ca101d5bfc97",
	                    "EndpointID": "e99fd6022d692fa69afcf7bcf689b400a636b35ef2585d88efa596fe15564320",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "56:db:6f:c4:2e:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-463278",
	                        "5ffb24e50471"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
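Note: the inspect output above shows the kic node container itself is healthy: it is Running, started at 22:44:12, and publishes the usual minikube ports, with the API server's 8441/tcp mapped to 127.0.0.1:33163. The failure is therefore confined to image pulls inside the cluster, not to the container runtime or host networking. For reference, a one-line way to read the same mapping without the full JSON (standard docker CLI, nothing assumed beyond the container name above):

    docker port functional-463278 8441/tcp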
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-463278 -n functional-463278
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 logs -n 25: (1.15653479s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-463278 image ls                                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount-9p | grep 9p                                                                                                                  │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image save kicbase/echo-server:functional-463278 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh -- ls -la /mount-9p                                                                                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image rm kicbase/echo-server:functional-463278 --alsologtostderr                                                                                    │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image ls                                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh sudo umount -f /mount-9p                                                                                                                        │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ image     │ functional-463278 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ mount     │ -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount2 --alsologtostderr -v=1                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount     │ -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount1 --alsologtostderr -v=1                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount     │ -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount3 --alsologtostderr -v=1                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ ssh       │ functional-463278 ssh findmnt -T /mount1                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ image     │ functional-463278 image ls                                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ image     │ functional-463278 image save --daemon kicbase/echo-server:functional-463278 --alsologtostderr                                                                         │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh sudo cat /etc/test/nested/copy/375863/hosts                                                                                                     │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount1                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh echo hello                                                                                                                                      │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount2                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh cat /etc/hostname                                                                                                                               │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh       │ functional-463278 ssh findmnt -T /mount3                                                                                                                              │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ tunnel    │ functional-463278 tunnel --alsologtostderr                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel    │ functional-463278 tunnel --alsologtostderr                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount     │ -p functional-463278 --kill=true                                                                                                                                      │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-463278 --alsologtostderr -v=1                                                                                                        │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel    │ functional-463278 tunnel --alsologtostderr                                                                                                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:45:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:45:57.091874  423436 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:57.091995  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092005  423436 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:57.092011  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092185  423436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:45:57.092601  423436 out.go:368] Setting JSON to false
	I1207 22:45:57.093579  423436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5306,"bootTime":1765142251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:57.093633  423436 start.go:143] virtualization: kvm guest
	I1207 22:45:57.095151  423436 out.go:179] * [functional-463278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:57.096384  423436 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:57.096396  423436 notify.go:221] Checking for updates...
	I1207 22:45:57.098316  423436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:57.099355  423436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:45:57.100301  423436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:45:57.101400  423436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:57.102435  423436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:57.103770  423436 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:45:57.104272  423436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:57.126808  423436 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:57.126897  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.180398  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:62 SystemTime:2025-12-07 22:45:57.171174532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.180563  423436 docker.go:319] overlay module found
	I1207 22:45:57.181969  423436 out.go:179] * Using the docker driver based on existing profile
	I1207 22:45:57.182881  423436 start.go:309] selected driver: docker
	I1207 22:45:57.182896  423436 start.go:927] validating driver "docker" against &{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.183005  423436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:57.183099  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.235635  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:72 SystemTime:2025-12-07 22:45:57.226815895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.236268  423436 cni.go:84] Creating CNI manager for ""
	I1207 22:45:57.236345  423436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:45:57.236398  423436 start.go:353] cluster config:
	{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.238530  423436 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e75f0ea836e87       56cc512116c8f       6 minutes ago       Exited              mount-munger              0                   6198658580680       busybox-mount                               default
	2bb308b96eda4       a5f569d49a979       6 minutes ago       Running             kube-apiserver            0                   afb596781aa30       kube-apiserver-functional-463278            kube-system
	a4a323a1e2f64       01e8bacf0f500       6 minutes ago       Running             kube-controller-manager   2                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	8818bd5eb1cbf       a3e246e9556e9       6 minutes ago       Running             etcd                      1                   25e3962e39ce3       etcd-functional-463278                      kube-system
	03e16e4fab102       01e8bacf0f500       6 minutes ago       Exited              kube-controller-manager   1                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	b99f837de77a9       6e38f40d628db       6 minutes ago       Running             storage-provisioner       1                   88b5c757605a6       storage-provisioner                         kube-system
	2f92bec061655       88320b5498ff2       6 minutes ago       Running             kube-scheduler            1                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	8ad918e337f8d       52546a367cc9e       6 minutes ago       Running             coredns                   1                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	c01638eb7603f       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	de7501abd974a       8aa150647e88a       6 minutes ago       Running             kube-proxy                1                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c8106c646b2b5       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	f17929caca326       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       0                   88b5c757605a6       storage-provisioner                         kube-system
	1aebb420b7e16       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	241e1affb6fa4       8aa150647e88a       7 minutes ago       Exited              kube-proxy                0                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c029b7ba4e0eb       88320b5498ff2       7 minutes ago       Exited              kube-scheduler            0                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	bbe9f6a088ec6       a3e246e9556e9       7 minutes ago       Exited              etcd                      0                   25e3962e39ce3       etcd-functional-463278                      kube-system
	
	
	==> containerd <==
	Dec 07 22:51:52 functional-463278 containerd[3808]: time="2025-12-07T22:51:52.892415631Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 07 22:51:53 functional-463278 containerd[3808]: time="2025-12-07T22:51:53.789680289Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:53 functional-463278 containerd[3808]: time="2025-12-07T22:51:53.789719691Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Dec 07 22:51:57 functional-463278 containerd[3808]: time="2025-12-07T22:51:57.331083426Z" level=info msg="RunPodSandbox for name:\"hello-node-connect-7d85dfc575-h4vhx\"  uid:\"6311e70b-1eaf-4894-8481-b6038190b988\"  namespace:\"default\""
	Dec 07 22:51:57 functional-463278 containerd[3808]: time="2025-12-07T22:51:57.359581861Z" level=info msg="connecting to shim b10a6e479d3be9e8d5a92111ee8d2cb0a22827e573d033cacca8a5d659751e09" address="unix:///run/containerd/s/5b6fe9ffa17194508937b73dad3f171901b618bbe24588a29db7c1962bb5c2e3" namespace=k8s.io protocol=ttrpc version=3
	Dec 07 22:51:57 functional-463278 containerd[3808]: time="2025-12-07T22:51:57.422091922Z" level=info msg="RunPodSandbox for name:\"hello-node-connect-7d85dfc575-h4vhx\"  uid:\"6311e70b-1eaf-4894-8481-b6038190b988\"  namespace:\"default\" returns sandbox id \"b10a6e479d3be9e8d5a92111ee8d2cb0a22827e573d033cacca8a5d659751e09\""
	Dec 07 22:51:57 functional-463278 containerd[3808]: time="2025-12-07T22:51:57.423365851Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.319125854Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.319161969Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.662255798Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.662347618Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.663252025Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.663359597Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.664080678Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.664167205Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.664847935Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.664939681Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.665621747Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.665731314Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.666526524Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.666651138Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.667456838Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.667557702Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.1GB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.668194428Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.2MB.events\""
	Dec 07 22:51:58 functional-463278 containerd[3808]: time="2025-12-07T22:51:58.668267703Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.1GB.events\""
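	
	Note: the PullImage errors above for docker.io/library/nginx:latest and docker.io/kicbase/echo-server:latest are HTTP 429 "Too Many Requests" responses from Docker Hub's unauthenticated pull rate limit, which likely explains why pods such as nginx-svc and hello-node-connect-7d85dfc575-h4vhx in the node summary below never become Ready before the tests time out. One possible mitigation for a rerun (an illustrative sketch only, not something executed in this run, and assuming the CI host itself can still reach Docker Hub or has authenticated credentials) is to pull the images on the host and side-load them into the profile so the in-cluster pull never contacts the registry:
	
	  # hypothetical rerun mitigation, not part of this report's command history
	  docker pull docker.io/library/nginx:latest
	  out/minikube-linux-amd64 -p functional-463278 image load docker.io/library/nginx:latest
	  docker pull docker.io/kicbase/echo-server:latest
	  out/minikube-linux-amd64 -p functional-463278 image load docker.io/kicbase/echo-server:latest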
	
	
	==> coredns [8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49423 - 38940 "HINFO IN 5983710415482786725.2619087190349043031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056989451s
	
	
	==> coredns [c8106c646b2b5fc16823350d043dc9c6ec16d210e799856486b7c76fe08c83b2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46018 - 48176 "HINFO IN 6865890663098146517.2279839593742268272. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090558582s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-463278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-463278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-463278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_44_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:44:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-463278
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:51:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:50:06 +0000   Sun, 07 Dec 2025 22:44:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-463278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f5a83adb-14f5-4487-b0c9-2f0e171fa46e
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-g6tqf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     hello-node-connect-7d85dfc575-h4vhx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  default                     mysql-5bb876957f-kk5fh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     59s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-h8rkq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m30s
	  kube-system                 etcd-functional-463278                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m36s
	  kube-system                 kindnet-8r765                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m31s
	  kube-system                 kube-apiserver-functional-463278              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-functional-463278     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-proxy-k8s49                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-scheduler-functional-463278              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4lmsb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xmmkt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m30s                  kube-proxy       
	  Normal  Starting                 6m56s                  kube-proxy       
	  Normal  Starting                 7m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m36s                  kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s                  kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s                  kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m32s                  node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	  Normal  NodeReady                7m19s                  kubelet          Node functional-463278 status is now: NodeReady
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x7 over 6m38s)  kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m32s                  node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb] <==
	{"level":"warn","ts":"2025-12-07T22:45:29.999927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.005859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.014795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.021623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.027533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.033502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.039545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.046098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.059538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.065500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.071745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.077937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.084034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.090594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.097141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.104352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.110852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.117485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.124202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.130747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.137443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.153632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.160669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.168476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.212502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	
	
	==> etcd [bbe9f6a088ec6daf4376fac032c87e27c49c98b3182d7279277f7dba7f36d9e7] <==
	{"level":"warn","ts":"2025-12-07T22:44:26.429250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.436264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.442173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.452972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.459855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.466810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.514205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:45:09.161759Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T22:45:09.161847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T22:45:09.162016Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163619Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.163658Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163818Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163836Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163875Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T22:45:16.163881Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-07T22:45:16.163885Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166508Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T22:45:16.166565Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166592Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T22:45:16.166598Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:52:05 up  1:34,  0 user,  load average: 0.15, 0.40, 0.87
	Linux functional-463278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1aebb420b7e165d65642a376d6c5580ab3f33735dedae31a4ccec446fd1ecdb3] <==
	I1207 22:44:35.954967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 22:44:35.955187       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 22:44:35.955345       1 main.go:148] setting mtu 1500 for CNI 
	I1207 22:44:35.955368       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 22:44:35.955391       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T22:44:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 22:44:36.182994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 22:44:36.183470       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 22:44:36.183518       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 22:44:36.183832       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 22:44:36.484277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 22:44:36.484299       1 metrics.go:72] Registering metrics
	I1207 22:44:36.484346       1 controller.go:711] "Syncing nftables rules"
	I1207 22:44:46.184838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:46.184893       1 main.go:301] handling current node
	I1207 22:44:56.191643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:56.191687       1 main.go:301] handling current node
	I1207 22:45:06.188919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:45:06.188951       1 main.go:301] handling current node
	
	
	==> kindnet [c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138] <==
	I1207 22:49:59.592572       1 main.go:301] handling current node
	I1207 22:50:09.592839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:09.592879       1 main.go:301] handling current node
	I1207 22:50:19.599889       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:19.599923       1 main.go:301] handling current node
	I1207 22:50:29.592811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:29.592844       1 main.go:301] handling current node
	I1207 22:50:39.592817       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:39.592854       1 main.go:301] handling current node
	I1207 22:50:49.600092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:49.600125       1 main.go:301] handling current node
	I1207 22:50:59.592809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:50:59.592841       1 main.go:301] handling current node
	I1207 22:51:09.592177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:51:09.592207       1 main.go:301] handling current node
	I1207 22:51:19.600074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:51:19.600109       1 main.go:301] handling current node
	I1207 22:51:29.591987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:51:29.592028       1 main.go:301] handling current node
	I1207 22:51:39.594753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:51:39.594793       1 main.go:301] handling current node
	I1207 22:51:49.592859       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:51:49.592917       1 main.go:301] handling current node
	I1207 22:51:59.592331       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:51:59.592364       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db] <==
	I1207 22:45:30.638442       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 22:45:30.639727       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 22:45:30.639750       1 policy_source.go:240] refreshing policies
	I1207 22:45:30.642921       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 22:45:30.676523       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:30.683392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:30.877540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:30.877540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:31.540362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 22:45:31.745456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 22:45:31.746522       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:31.750270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:32.224588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:32.307847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:32.351106       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:32.356452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:39.118928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:49.974663       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.22.34"}
	I1207 22:45:54.183552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.226.250"}
	I1207 22:46:04.555257       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.91.117"}
	I1207 22:46:04.940065       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:46:05.042815       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.100.228"}
	I1207 22:46:05.053530       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.49.67"}
	I1207 22:51:06.223470       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.123.117"}
	I1207 22:51:57.073341       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.24.204"}
	
	
	==> kube-controller-manager [03e16e4fab102ebca442c6bfb3bbf1506144ba4ce15c3842a23b812ecedd698e] <==
	I1207 22:45:18.193158       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1207 22:45:18.195271       1 controllermanager.go:781] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195294       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1207 22:45:18.195400       1 attach_detach_controller.go:336] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195417       1 shared_informer.go:349] "Waiting for caches to sync" controller="attach detach"
	I1207 22:45:18.197804       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1207 22:45:18.197829       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I1207 22:45:18.197849       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198255       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1207 22:45:18.198279       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1207 22:45:18.198288       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198813       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1207 22:45:18.198845       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1207 22:45:18.198866       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.199293       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1207 22:45:18.199316       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="podcertificaterequest-cleaner-controller" requiredFeatureGates=["PodCertificateRequest"]
	I1207 22:45:18.199490       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1207 22:45:18.199502       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1207 22:45:18.199521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.454250       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1207 22:45:18.454300       1 controllermanager.go:781] "Started controller" controller="node-ipam-controller"
	I1207 22:45:18.454309       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="node-route-controller"
	I1207 22:45:18.454514       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1207 22:45:18.454532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	F1207 22:45:18.454685       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pv-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198] <==
	I1207 22:45:33.954722       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 22:45:33.981269       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 22:45:33.981349       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 22:45:33.981379       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:45:33.981353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:45:33.981423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 22:45:33.981428       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 22:45:33.981439       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:45:33.981499       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 22:45:33.981570       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 22:45:33.982774       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 22:45:33.982803       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 22:45:33.982858       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:45:33.982877       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:45:33.982949       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 22:45:33.984738       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 22:45:33.985928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:45:33.988170       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 22:45:34.004392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1207 22:46:04.986011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.990408       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994096       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994195       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.997536       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:05.002933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [241e1affb6fa46a9506a75655e9de27f3e048cac663b0a5a06aef503e652daed] <==
	I1207 22:44:35.482903       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:35.541247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:44:35.642069       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:44:35.642115       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:35.642234       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:35.661982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:35.662032       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:44:35.667042       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:35.667374       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:44:35.667400       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:35.668618       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:35.668651       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:35.668670       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:35.668742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:35.668749       1 config.go:309] "Starting node config controller"
	I1207 22:44:35.668757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:35.668764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:35.668748       1 config.go:200] "Starting service config controller"
	I1207 22:44:35.668773       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:35.768866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:44:35.768912       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:35.769022       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb] <==
	I1207 22:45:09.306366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:45:09.407512       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:45:09.407557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:09.408004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:09.429174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:09.429227       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:45:09.434622       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:09.434936       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:45:09.434973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:09.436217       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:09.436231       1 config.go:200] "Starting service config controller"
	I1207 22:45:09.436245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:09.436251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:09.436274       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:09.436289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:09.436398       1 config.go:309] "Starting node config controller"
	I1207 22:45:09.436416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:09.436425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:09.537119       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:09.537175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:09.537193       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1207 22:45:30.585918       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.586265       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1207 22:45:30.586266       1 reflector.go:205] "Failed to watch" err="nodes \"functional-463278\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.586264       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-scheduler [2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe] <==
	I1207 22:45:18.203451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203454       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.203876       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:18.203919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:18.304448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.304470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.304579       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 22:45:30.559615       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:45:30.559636       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.566816       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:45:30.566860       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:45:30.566880       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:45:30.566895       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:45:30.566929       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:45:30.566953       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:45:30.567950       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:45:30.568153       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:45:30.568339       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:45:30.569325       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:45:30.569581       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:45:30.569607       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.577884       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 22:45:30.581596       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:45:30.582317       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [c029b7ba4e0eb3389fbfbfbaf6190b61ae6eebcde41377e62c6e973e8138ad72] <==
	E1207 22:44:26.890579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:26.890538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:44:26.890639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:44:26.890644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:44:26.890681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:26.890685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:44:26.890794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:27.715111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:44:27.728196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:27.769160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:44:27.774090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:44:27.792212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:27.900910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:44:27.908824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:44:27.921774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:44:27.971171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:44:28.036376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:28.098446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1207 22:44:31.286992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024037       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 22:45:09.024135       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024235       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 22:45:09.024258       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 22:45:09.024272       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1207 22:45:09.024292       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 22:51:53 functional-463278 kubelet[4931]: E1207 22:51:53.790185    4931 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 22:51:53 functional-463278 kubelet[4931]:         container myfrontend start failed in pod sp-pod_default(6fc1a713-e1a4-4403-94de-bcd12759e727): ErrImagePull: failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	Dec 07 22:51:53 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:51:53 functional-463278 kubelet[4931]:  > logger="UnhandledError"
	Dec 07 22:51:53 functional-463278 kubelet[4931]: E1207 22:51:53.790215    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 22:51:53 functional-463278 kubelet[4931]: E1207 22:51:53.892319    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 22:51:57 functional-463278 kubelet[4931]: I1207 22:51:57.134202    4931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7v52\" (UniqueName: \"kubernetes.io/projected/6311e70b-1eaf-4894-8481-b6038190b988-kube-api-access-l7v52\") pod \"hello-node-connect-7d85dfc575-h4vhx\" (UID: \"6311e70b-1eaf-4894-8481-b6038190b988\") " pod="default/hello-node-connect-7d85dfc575-h4vhx"
	Dec 07 22:51:58 functional-463278 kubelet[4931]: E1207 22:51:58.319437    4931 log.go:32] "PullImage from image service failed" err=<
	Dec 07 22:51:58 functional-463278 kubelet[4931]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 22:51:58 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:51:58 functional-463278 kubelet[4931]:  > image="kicbase/echo-server:latest"
	Dec 07 22:51:58 functional-463278 kubelet[4931]: E1207 22:51:58.319484    4931 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 22:51:58 functional-463278 kubelet[4931]:         failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 22:51:58 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:51:58 functional-463278 kubelet[4931]:  > image="kicbase/echo-server:latest"
	Dec 07 22:51:58 functional-463278 kubelet[4931]: E1207 22:51:58.319561    4931 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 07 22:51:58 functional-463278 kubelet[4931]:         container echo-server start failed in pod hello-node-connect-7d85dfc575-h4vhx_default(6311e70b-1eaf-4894-8481-b6038190b988): ErrImagePull: failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 22:51:58 functional-463278 kubelet[4931]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 22:51:58 functional-463278 kubelet[4931]:  > logger="UnhandledError"
	Dec 07 22:51:58 functional-463278 kubelet[4931]: E1207 22:51:58.319590    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-h4vhx" podUID="6311e70b-1eaf-4894-8481-b6038190b988"
	Dec 07 22:51:58 functional-463278 kubelet[4931]: E1207 22:51:58.844818    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-h4vhx" podUID="6311e70b-1eaf-4894-8481-b6038190b988"
	Dec 07 22:52:00 functional-463278 kubelet[4931]: E1207 22:52:00.892036    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 22:52:02 functional-463278 kubelet[4931]: E1207 22:52:02.892516    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-kk5fh" podUID="859e28d2-b9b7-4269-b149-83572d39d84f"
	Dec 07 22:52:02 functional-463278 kubelet[4931]: E1207 22:52:02.892673    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 22:52:04 functional-463278 kubelet[4931]: E1207 22:52:04.893042    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	
	
	==> storage-provisioner [b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c] <==
	W1207 22:51:40.459746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:42.462071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:42.466015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:44.469366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:44.473118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:46.476372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:46.480975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:48.483548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:48.488272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:50.491175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:50.496576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:52.499442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:52.503312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:54.506096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:54.510232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:56.513458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:56.517968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:58.521228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:58.524793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:00.527607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:00.531539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:02.535013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:02.539771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:04.542741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:04.547375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f17929caca3262a3a1b351d4290ad2ef17b04e0326dd1bbaaaad3c4f6ecc1761] <==
	W1207 22:44:46.734205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:46.737325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:44:46.832545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-463278_fdf61bc9-6000-426f-adc4-cbf24786c29e!
	W1207 22:44:48.739998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:48.743651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.747390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.751564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.754931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.758942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.761505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.766768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.770510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.774194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.777335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.782858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.786501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.790145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.793348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.798083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.800761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.804484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.807470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.811061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.814264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.818406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-463278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1 (95.363186ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:45:57 +0000
	      Finished:     Sun, 07 Dec 2025 22:45:57 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nlmhb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-nlmhb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m10s  default-scheduler  Successfully assigned default/busybox-mount to functional-463278
	  Normal  Pulling    6m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    6m9s   kubelet            Created container: mount-munger
	  Normal  Started    6m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-g6tqf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m12s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g6tqf to functional-463278
	  Warning  Failed     4m36s (x2 over 6m11s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  3m9s (x5 over 6m12s)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   3m8s (x5 over 6m11s)  kubelet  Error: ErrImagePull
	  Warning  Failed   3m8s (x3 over 5m57s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  61s (x21 over 6m11s)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   61s (x21 over 6m11s)  kubelet  Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-h4vhx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7v52 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l7v52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  9s    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h4vhx to functional-463278
	  Normal   Pulling    9s    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8s    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   8s  kubelet  Error: ErrImagePull
	  Normal   BackOff  8s  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   8s  kubelet  Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-kk5fh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7b56l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7b56l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  60s   default-scheduler  Successfully assigned default/mysql-5bb876957f-kk5fh to functional-463278
	  Warning  Failed     59s   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  19s (x3 over 60s)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   18s (x3 over 59s)  kubelet  Error: ErrImagePull
	  Warning  Failed   18s (x2 over 44s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  4s (x3 over 58s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed   4s (x3 over 58s)  kubelet  Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkl8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkl8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/nginx-svc to functional-463278
	  Normal   Pulling    3m2s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m1s (x5 over 6m)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m1s (x5 over 6m)     kubelet  Error: ErrImagePull
	  Warning  Failed   54s (x20 over 5m59s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  42s (x21 over 5m59s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmx4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jmx4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-463278
	  Normal   Pulling    3m6s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m5s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m5s (x5 over 6m1s)  kubelet  Error: ErrImagePull
	  Warning  Failed   51s (x20 over 6m)    kubelet  Error: ImagePullBackOff
	  Normal   BackOff  40s (x21 over 6m)    kubelet  Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4lmsb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xmmkt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.87s)
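Note: every pod described above is stuck in ErrImagePull/ImagePullBackOff because anonymous pulls from registry-1.docker.io return 429 Too Many Requests. As a minimal mitigation sketch (not part of this run; the profile name is taken from this report), the images can be pre-loaded into the node so no unauthenticated Docker Hub pull is needed:

	docker pull docker.io/nginx:alpine
	minikube -p functional-463278 image load docker.io/nginx:alpine

Authenticating the pulls instead (docker login on the node, or a configured registry mirror) is the usual alternative when pre-loading is not practical.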

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-463278 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-kk5fh" [859e28d2-b9b7-4269-b149-83572d39d84f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
I1207 22:51:09.173281  375863 retry.go:31] will retry after 47.603215319s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-12-07 23:01:06.568231444 +0000 UTC m=+1806.720897520
functional_test.go:1804: (dbg) Run:  kubectl --context functional-463278 describe po mysql-5bb876957f-kk5fh -n default
functional_test.go:1804: (dbg) kubectl --context functional-463278 describe po mysql-5bb876957f-kk5fh -n default:
Name:             mysql-5bb876957f-kk5fh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-463278/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:51:06 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7b56l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7b56l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-kk5fh to functional-463278
  Warning  Failed     8m36s (x3 over 9m44s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling  7m3s (x5 over 10m)    kubelet  Pulling image "docker.io/mysql:5.7"
  Warning  Failed   7m2s (x2 over 9m59s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   7m2s (x5 over 9m59s)    kubelet  Error: ErrImagePull
  Warning  Failed   4m54s (x20 over 9m58s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m42s (x21 over 9m58s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-463278 logs mysql-5bb876957f-kk5fh -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-463278 logs mysql-5bb876957f-kk5fh -n default: exit status 1 (63.455691ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-kk5fh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-463278 logs mysql-5bb876957f-kk5fh -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
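Because the mysql container never starts, kubectl logs has nothing to return (the BadRequest above). A sketch of how to read the waiting reason straight from the pod status instead, using the context and pod name from this run:

	kubectl --context functional-463278 get pod mysql-5bb876957f-kk5fh \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'

For this pod the output would be ImagePullBackOff (or ErrImagePull between back-off windows), matching the describe output above.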
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-463278
helpers_test.go:243: (dbg) docker inspect functional-463278:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	        "Created": "2025-12-07T22:44:12.758544687Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:44:12.792640745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/hosts",
	        "LogPath": "/var/lib/docker/containers/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478/5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478-json.log",
	        "Name": "/functional-463278",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-463278:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-463278",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5ffb24e5047194101cdaa375732e53f33c62b732f13de28d9cac4f210f10b478",
	                "LowerDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4e89f45bed7d69a81d7b06daeaf5fb7ed15e0ee8f064f2a2a68038c9163a1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-463278",
	                "Source": "/var/lib/docker/volumes/functional-463278/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-463278",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-463278",
	                "name.minikube.sigs.k8s.io": "functional-463278",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "73fcdda28fbda839afd47005c2057d1600198ad8b4bf801d9a3ceddf616c1b58",
	            "SandboxKey": "/var/run/docker/netns/73fcdda28fbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-463278": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "018e6fa20a7dd3d69ef98e40b8862683b0183989c49c03038371ca101d5bfc97",
	                    "EndpointID": "e99fd6022d692fa69afcf7bcf689b400a636b35ef2585d88efa596fe15564320",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "56:db:6f:c4:2e:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-463278",
	                        "5ffb24e50471"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-463278 -n functional-463278
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 logs -n 25: (1.150196951s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-463278 ssh findmnt -T /mount2                                                               │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh            │ functional-463278 ssh cat /etc/hostname                                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ ssh            │ functional-463278 ssh findmnt -T /mount3                                                               │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │ 07 Dec 25 22:46 UTC │
	│ tunnel         │ functional-463278 tunnel --alsologtostderr                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel         │ functional-463278 tunnel --alsologtostderr                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ mount          │ -p functional-463278 --kill=true                                                                       │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-463278 --alsologtostderr -v=1                                         │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ tunnel         │ functional-463278 tunnel --alsologtostderr                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:46 UTC │                     │
	│ addons         │ functional-463278 addons list                                                                          │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ addons         │ functional-463278 addons list -o json                                                                  │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-463278 update-context --alsologtostderr -v=2                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-463278 update-context --alsologtostderr -v=2                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-463278 update-context --alsologtostderr -v=2                                                │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format short --alsologtostderr                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format yaml --alsologtostderr                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-463278 ssh pgrep buildkitd                                                                  │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ image          │ functional-463278 image build -t localhost/my-image:functional-463278 testdata/build --alsologtostderr │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls                                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format json --alsologtostderr                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-463278 image ls --format table --alsologtostderr                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ service        │ functional-463278 service list                                                                         │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ service        │ functional-463278 service list -o json                                                                 │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ service        │ functional-463278 service --namespace=default --https --url hello-node                                 │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ service        │ functional-463278 service hello-node --url --format={{.IP}}                                            │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ service        │ functional-463278 service hello-node --url                                                             │ functional-463278 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:45:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:45:57.091874  423436 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:57.091995  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092005  423436 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:57.092011  423436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:57.092185  423436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:45:57.092601  423436 out.go:368] Setting JSON to false
	I1207 22:45:57.093579  423436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5306,"bootTime":1765142251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:57.093633  423436 start.go:143] virtualization: kvm guest
	I1207 22:45:57.095151  423436 out.go:179] * [functional-463278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:57.096384  423436 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:57.096396  423436 notify.go:221] Checking for updates...
	I1207 22:45:57.098316  423436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:57.099355  423436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:45:57.100301  423436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:45:57.101400  423436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:57.102435  423436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:57.103770  423436 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:45:57.104272  423436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:57.126808  423436 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:57.126897  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.180398  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:62 SystemTime:2025-12-07 22:45:57.171174532 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.180563  423436 docker.go:319] overlay module found
	I1207 22:45:57.181969  423436 out.go:179] * Using the docker driver based on existing profile
	I1207 22:45:57.182881  423436 start.go:309] selected driver: docker
	I1207 22:45:57.182896  423436 start.go:927] validating driver "docker" against &{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.183005  423436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:57.183099  423436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.235635  423436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:72 SystemTime:2025-12-07 22:45:57.226815895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.236268  423436 cni.go:84] Creating CNI manager for ""
	I1207 22:45:57.236345  423436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 22:45:57.236398  423436 start.go:353] cluster config:
	{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.238530  423436 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e75f0ea836e87       56cc512116c8f       15 minutes ago      Exited              mount-munger              0                   6198658580680       busybox-mount                               default
	2bb308b96eda4       a5f569d49a979       15 minutes ago      Running             kube-apiserver            0                   afb596781aa30       kube-apiserver-functional-463278            kube-system
	a4a323a1e2f64       01e8bacf0f500       15 minutes ago      Running             kube-controller-manager   2                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	8818bd5eb1cbf       a3e246e9556e9       15 minutes ago      Running             etcd                      1                   25e3962e39ce3       etcd-functional-463278                      kube-system
	03e16e4fab102       01e8bacf0f500       15 minutes ago      Exited              kube-controller-manager   1                   5eb66decae12e       kube-controller-manager-functional-463278   kube-system
	b99f837de77a9       6e38f40d628db       15 minutes ago      Running             storage-provisioner       1                   88b5c757605a6       storage-provisioner                         kube-system
	2f92bec061655       88320b5498ff2       15 minutes ago      Running             kube-scheduler            1                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	8ad918e337f8d       52546a367cc9e       15 minutes ago      Running             coredns                   1                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	c01638eb7603f       409467f978b4a       15 minutes ago      Running             kindnet-cni               1                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	de7501abd974a       8aa150647e88a       15 minutes ago      Running             kube-proxy                1                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c8106c646b2b5       52546a367cc9e       16 minutes ago      Exited              coredns                   0                   b1f6971889076       coredns-66bc5c9577-h8rkq                    kube-system
	f17929caca326       6e38f40d628db       16 minutes ago      Exited              storage-provisioner       0                   88b5c757605a6       storage-provisioner                         kube-system
	1aebb420b7e16       409467f978b4a       16 minutes ago      Exited              kindnet-cni               0                   9fbdd61d5e1fa       kindnet-8r765                               kube-system
	241e1affb6fa4       8aa150647e88a       16 minutes ago      Exited              kube-proxy                0                   a67aeb63180da       kube-proxy-k8s49                            kube-system
	c029b7ba4e0eb       88320b5498ff2       16 minutes ago      Exited              kube-scheduler            0                   d5d9a4864e7cc       kube-scheduler-functional-463278            kube-system
	bbe9f6a088ec6       a3e246e9556e9       16 minutes ago      Exited              etcd                      0                   25e3962e39ce3       etcd-functional-463278                      kube-system
	
	
	==> containerd <==
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.706637938Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.1GB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.707309907Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.2MB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.707390330Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.1GB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.708036533Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.2MB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.708106971Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.1GB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.708736611Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.2MB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.708810586Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.1GB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.709425156Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.2MB.events\""
	Dec 07 23:00:49 functional-463278 containerd[3808]: time="2025-12-07T23:00:49.709513105Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.724676982Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.724817957Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d26d291_e543_46cb_b3eb_fb46c80d9971.slice/cri-containerd-8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.725568557Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.725641608Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf470a8_cc57_40ba_8f7f_f530ce462a2e.slice/cri-containerd-b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.726365547Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.726485309Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e9ebbd3d601e2f2313533693bca03e6.slice/cri-containerd-2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.727208314Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.727308406Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab88a548_17f8_47a7_a570_18c415f65178.slice/cri-containerd-de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.727976196Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.728059565Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac5509e6a3d6be05ab3675ee8e30959.slice/cri-containerd-a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.728833992Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.728960015Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefebb8e185ecbbce9100dae8ce48b206.slice/cri-containerd-2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.729626353Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.729732780Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-podd60fdd96_54b9_4774_931c_7c2b63e717cd.slice/cri-containerd-c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138.scope/hugetlb.1GB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.730413390Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.2MB.events\""
	Dec 07 23:00:59 functional-463278 containerd[3808]: time="2025-12-07T23:00:59.730501471Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b0c90499e6f16c1e1ef709c99f74a4.slice/cri-containerd-8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb.scope/hugetlb.1GB.events\""
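
The repeated containerd errors above come from reading cgroup v2 hugetlb.<size>.events files: on cgroup v2 these files contain a key/value line of the form "max <count>" rather than a bare integer, so treating the whole file content as a single uint fails with exactly the message logged. The Go sketch below is illustrative only and is not containerd's code; naiveParse and parseEvents are hypothetical helpers that reproduce the failing parse on the "max 0" content seen in the log and show one way to read the counter.

// Illustrative only: a hypothetical reader for cgroup v2 hugetlb.<size>.events
// files, whose content is a "max <count>" line rather than a bare integer.
// This is not containerd's code; it just reproduces the parse failure seen in
// the log above and one way to read the value correctly.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// naiveParse mimics treating the whole file content as a single uint,
// which fails on "max 0" in the same way the containerd error messages show.
func naiveParse(content string) (uint64, error) {
	return strconv.ParseUint(strings.TrimSpace(content), 10, 64)
}

// parseEvents reads the "max" counter from a key/value events file.
func parseEvents(content string) (uint64, error) {
	for _, line := range strings.Split(content, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "max" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("no max entry found")
}

func main() {
	content := "max 0\n" // typical hugetlb.2MB.events content

	if _, err := naiveParse(content); err != nil {
		fmt.Println("naive parse fails:", err) // same class of error as logged above
	}
	if v, err := parseEvents(content); err == nil {
		fmt.Println("max events:", v)
	}
}
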
	
	
	==> coredns [8ad918e337f8d1b8a5bf5bd69ce10cebeb1144cc6e19451beb4971b6e85eb0eb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49423 - 38940 "HINFO IN 5983710415482786725.2619087190349043031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.056989451s
	
	
	==> coredns [c8106c646b2b5fc16823350d043dc9c6ec16d210e799856486b7c76fe08c83b2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46018 - 48176 "HINFO IN 6865890663098146517.2279839593742268272. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090558582s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-463278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-463278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-463278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_44_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:44:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-463278
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:00:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:56:02 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:56:02 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:56:02 +0000   Sun, 07 Dec 2025 22:44:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:56:02 +0000   Sun, 07 Dec 2025 22:44:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-463278
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f5a83adb-14f5-4487-b0c9-2f0e171fa46e
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-g6tqf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-h4vhx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  default                     mysql-5bb876957f-kk5fh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-h8rkq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-functional-463278                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-8r765                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-functional-463278              250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-463278     200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-k8s49                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-463278              100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4lmsb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xmmkt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
	  Normal  NodeReady                16m                kubelet          Node functional-463278 status is now: NodeReady
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-463278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-463278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-463278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node functional-463278 event: Registered Node functional-463278 in Controller
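
For context on the percentage columns in the node description above: each figure is the request or limit divided by the node's Allocatable capacity (8 CPUs, 32863356Ki memory), truncated to a whole percent, which is consistent with every row in the table (for example, 1450m of 8000m CPU shows as 18%). A minimal Go sketch, with a hypothetical percentOf helper and the numbers taken from the report:

// Illustrative only: how the percentage columns in the "describe nodes" output
// above relate to the node's Allocatable figures. percentOf is a hypothetical
// helper; the inputs below are copied from the report.
package main

import "fmt"

// percentOf returns used/capacity as a whole-number percentage, truncated,
// matching the values shown in the table (e.g. 920Mi of 32863356Ki -> 2%).
func percentOf(used, capacity int64) int64 {
	return used * 100 / capacity
}

func main() {
	const (
		cpuCapacityMilli = 8000     // 8 CPUs
		memCapacityKi    = 32863356 // from Allocatable
	)
	fmt.Println(percentOf(1450, cpuCapacityMilli))  // cpu requests 1450m  -> 18
	fmt.Println(percentOf(800, cpuCapacityMilli))   // cpu limits   800m   -> 10
	fmt.Println(percentOf(732*1024, memCapacityKi)) // memory requests 732Mi -> 2
	fmt.Println(percentOf(920*1024, memCapacityKi)) // memory limits   920Mi -> 2
}
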
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [8818bd5eb1cbf3deefd6eed443a17d18a8bf5ee74e292d80063167b747a29bbb] <==
	{"level":"warn","ts":"2025-12-07T22:45:30.039545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.046098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.059538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.065500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.071745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.077937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.084034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.090594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.097141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.104352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.110852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.117485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.124202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.130747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.137443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.153632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.160669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.168476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:30.212502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:55:29.758715Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1193}
	{"level":"info","ts":"2025-12-07T22:55:29.777659Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1193,"took":"18.645782ms","hash":2407612845,"current-db-size-bytes":3903488,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1949696,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-07T22:55:29.777713Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2407612845,"revision":1193,"compact-revision":-1}
	{"level":"info","ts":"2025-12-07T23:00:29.763520Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1615}
	{"level":"info","ts":"2025-12-07T23:00:29.766603Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1615,"took":"2.695773ms","hash":2056168582,"current-db-size-bytes":3903488,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2293760,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-12-07T23:00:29.766641Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2056168582,"revision":1615,"compact-revision":1193}
	
	
	==> etcd [bbe9f6a088ec6daf4376fac032c87e27c49c98b3182d7279277f7dba7f36d9e7] <==
	{"level":"warn","ts":"2025-12-07T22:44:26.429250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.436264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.442173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.452972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.459855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.466810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:26.514205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:45:09.161759Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T22:45:09.161847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T22:45:09.162016Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T22:45:16.163619Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.163658Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T22:45:16.163688Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163818Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163836Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T22:45:16.163875Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T22:45:16.163881Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-07T22:45:16.163885Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166508Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T22:45:16.166565Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T22:45:16.166592Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T22:45:16.166598Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-463278","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:01:07 up  1:43,  0 user,  load average: 0.18, 0.19, 0.53
	Linux functional-463278 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1aebb420b7e165d65642a376d6c5580ab3f33735dedae31a4ccec446fd1ecdb3] <==
	I1207 22:44:35.954967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 22:44:35.955187       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 22:44:35.955345       1 main.go:148] setting mtu 1500 for CNI 
	I1207 22:44:35.955368       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 22:44:35.955391       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T22:44:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 22:44:36.182994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 22:44:36.183470       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 22:44:36.183518       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 22:44:36.183832       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 22:44:36.484277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 22:44:36.484299       1 metrics.go:72] Registering metrics
	I1207 22:44:36.484346       1 controller.go:711] "Syncing nftables rules"
	I1207 22:44:46.184838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:46.184893       1 main.go:301] handling current node
	I1207 22:44:56.191643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:44:56.191687       1 main.go:301] handling current node
	I1207 22:45:06.188919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:45:06.188951       1 main.go:301] handling current node
	
	
	==> kindnet [c01638eb7603f7b8f2234eb7b4f686364cfd3fa038a503c53377d321b2d25138] <==
	I1207 22:58:59.593310       1 main.go:301] handling current node
	I1207 22:59:09.598658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:09.598720       1 main.go:301] handling current node
	I1207 22:59:19.598134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:19.598166       1 main.go:301] handling current node
	I1207 22:59:29.592314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:29.592346       1 main.go:301] handling current node
	I1207 22:59:39.600467       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:39.600505       1 main.go:301] handling current node
	I1207 22:59:49.592763       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:49.592792       1 main.go:301] handling current node
	I1207 22:59:59.598126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:59.598169       1 main.go:301] handling current node
	I1207 23:00:09.594581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:09.594613       1 main.go:301] handling current node
	I1207 23:00:19.591998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:19.592033       1 main.go:301] handling current node
	I1207 23:00:29.594313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:29.594369       1 main.go:301] handling current node
	I1207 23:00:39.600396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:39.600438       1 main.go:301] handling current node
	I1207 23:00:49.591354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:49.591398       1 main.go:301] handling current node
	I1207 23:00:59.594643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:00:59.594677       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2bb308b96eda42ec0ad4cc021e353ed01f9914a9f38712d6fe8e6243d819e3db] <==
	I1207 22:45:30.639727       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 22:45:30.639750       1 policy_source.go:240] refreshing policies
	I1207 22:45:30.642921       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 22:45:30.676523       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:30.683392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:30.877540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:31.540362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 22:45:31.745456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 22:45:31.746522       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:31.750270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:32.224588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:32.307847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:32.351106       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:32.356452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:39.118928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:49.974663       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.22.34"}
	I1207 22:45:54.183552       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.226.250"}
	I1207 22:46:04.555257       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.91.117"}
	I1207 22:46:04.940065       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:46:05.042815       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.100.228"}
	I1207 22:46:05.053530       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.49.67"}
	I1207 22:51:06.223470       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.123.117"}
	I1207 22:51:57.073341       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.24.204"}
	I1207 22:55:30.584933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [03e16e4fab102ebca442c6bfb3bbf1506144ba4ce15c3842a23b812ecedd698e] <==
	I1207 22:45:18.193158       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint"
	I1207 22:45:18.195271       1 controllermanager.go:781] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195294       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1207 22:45:18.195400       1 attach_detach_controller.go:336] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1207 22:45:18.195417       1 shared_informer.go:349] "Waiting for caches to sync" controller="attach detach"
	I1207 22:45:18.197804       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1207 22:45:18.197829       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
	I1207 22:45:18.197849       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198255       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1207 22:45:18.198279       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1207 22:45:18.198288       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.198813       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1207 22:45:18.198845       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1207 22:45:18.198866       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.199293       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1207 22:45:18.199316       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="podcertificaterequest-cleaner-controller" requiredFeatureGates=["PodCertificateRequest"]
	I1207 22:45:18.199490       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1207 22:45:18.199502       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1207 22:45:18.199521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 22:45:18.454250       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1207 22:45:18.454300       1 controllermanager.go:781] "Started controller" controller="node-ipam-controller"
	I1207 22:45:18.454309       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="node-route-controller"
	I1207 22:45:18.454514       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1207 22:45:18.454532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	F1207 22:45:18.454685       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pv-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [a4a323a1e2f645854da5df61fb2239f00b95df2847c639bb7e82e2a9aee68198] <==
	I1207 22:45:33.954722       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 22:45:33.981269       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 22:45:33.981349       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 22:45:33.981379       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:45:33.981353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:45:33.981423       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 22:45:33.981428       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 22:45:33.981439       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:45:33.981499       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 22:45:33.981570       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 22:45:33.982774       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 22:45:33.982803       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 22:45:33.982858       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:45:33.982877       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:45:33.982949       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 22:45:33.984738       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 22:45:33.985928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:45:33.988170       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 22:45:34.004392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1207 22:46:04.986011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.990408       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994096       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.994195       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:04.997536       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:46:05.002933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [241e1affb6fa46a9506a75655e9de27f3e048cac663b0a5a06aef503e652daed] <==
	I1207 22:44:35.482903       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:35.541247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:44:35.642069       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:44:35.642115       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:35.642234       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:35.661982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:35.662032       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:44:35.667042       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:35.667374       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:44:35.667400       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:35.668618       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:35.668651       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:35.668670       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:35.668742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:35.668749       1 config.go:309] "Starting node config controller"
	I1207 22:44:35.668757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:35.668764       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:35.668748       1 config.go:200] "Starting service config controller"
	I1207 22:44:35.668773       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:35.768866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:44:35.768912       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:35.769022       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [de7501abd974adb785be3742251d21b68ca623f7b1fc179faa6032be7e09ebeb] <==
	I1207 22:45:09.306366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:45:09.407512       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:45:09.407557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:09.408004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:09.429174       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:09.429227       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:45:09.434622       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:09.434936       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:45:09.434973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:09.436217       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:09.436231       1 config.go:200] "Starting service config controller"
	I1207 22:45:09.436245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:09.436251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:09.436274       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:09.436289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:09.436398       1 config.go:309] "Starting node config controller"
	I1207 22:45:09.436416       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:09.436425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:09.537119       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:09.537175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:09.537193       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1207 22:45:30.585918       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.586265       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1207 22:45:30.586266       1 reflector.go:205] "Failed to watch" err="nodes \"functional-463278\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.586264       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-scheduler [2f92bec0616550c97f8ee1a3211b8813d3424295e7164e828d1c5b432b8d79fe] <==
	I1207 22:45:18.203451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.203454       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.203876       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:18.203919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:18.304448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1207 22:45:18.304470       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:45:18.304579       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 22:45:30.559615       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:45:30.559636       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:45:30.566816       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:45:30.566860       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:45:30.566880       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:45:30.566895       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:45:30.566929       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:45:30.566953       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 22:45:30.567950       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:45:30.568153       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:45:30.568339       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:45:30.569325       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:45:30.569581       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:45:30.569607       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:45:30.577884       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 22:45:30.581596       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 22:45:30.582317       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [c029b7ba4e0eb3389fbfbfbaf6190b61ae6eebcde41377e62c6e973e8138ad72] <==
	E1207 22:44:26.890579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:26.890538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:44:26.890639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:44:26.890644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:44:26.890681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:26.890685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:44:26.890794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:27.715111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:44:27.728196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:44:27.769160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:44:27.774090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:44:27.792212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:44:27.900910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:44:27.908824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 22:44:27.921774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:44:27.971171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:44:28.036376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:44:28.098446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1207 22:44:31.286992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024037       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 22:45:09.024135       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:09.024235       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 22:45:09.024258       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 22:45:09.024272       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1207 22:45:09.024292       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 23:00:19 functional-463278 kubelet[4931]: E1207 23:00:19.893228    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 23:00:20 functional-463278 kubelet[4931]: E1207 23:00:20.892558    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-h4vhx" podUID="6311e70b-1eaf-4894-8481-b6038190b988"
	Dec 07 23:00:25 functional-463278 kubelet[4931]: E1207 23:00:25.891601    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 23:00:25 functional-463278 kubelet[4931]: E1207 23:00:25.892260    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 23:00:30 functional-463278 kubelet[4931]: E1207 23:00:30.892672    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-kk5fh" podUID="859e28d2-b9b7-4269-b149-83572d39d84f"
	Dec 07 23:00:31 functional-463278 kubelet[4931]: E1207 23:00:31.892509    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 23:00:32 functional-463278 kubelet[4931]: E1207 23:00:32.893011    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 23:00:34 functional-463278 kubelet[4931]: E1207 23:00:34.892559    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 23:00:35 functional-463278 kubelet[4931]: E1207 23:00:35.891739    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-h4vhx" podUID="6311e70b-1eaf-4894-8481-b6038190b988"
	Dec 07 23:00:37 functional-463278 kubelet[4931]: E1207 23:00:37.892049    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 23:00:39 functional-463278 kubelet[4931]: E1207 23:00:39.892715    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 23:00:41 functional-463278 kubelet[4931]: E1207 23:00:41.895813    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-kk5fh" podUID="859e28d2-b9b7-4269-b149-83572d39d84f"
	Dec 07 23:00:43 functional-463278 kubelet[4931]: E1207 23:00:43.891906    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 23:00:45 functional-463278 kubelet[4931]: E1207 23:00:45.893127    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 23:00:46 functional-463278 kubelet[4931]: E1207 23:00:46.893013    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 23:00:47 functional-463278 kubelet[4931]: E1207 23:00:47.895153    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-h4vhx" podUID="6311e70b-1eaf-4894-8481-b6038190b988"
	Dec 07 23:00:48 functional-463278 kubelet[4931]: E1207 23:00:48.892198    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 23:00:52 functional-463278 kubelet[4931]: E1207 23:00:52.892262    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	Dec 07 23:00:56 functional-463278 kubelet[4931]: E1207 23:00:56.892972    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-kk5fh" podUID="859e28d2-b9b7-4269-b149-83572d39d84f"
	Dec 07 23:00:58 functional-463278 kubelet[4931]: E1207 23:00:58.891758    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6fc1a713-e1a4-4403-94de-bcd12759e727"
	Dec 07 23:00:58 functional-463278 kubelet[4931]: E1207 23:00:58.892360    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="a6ea4e44-3527-4f0c-9cae-328ed5b798c6"
	Dec 07 23:00:58 functional-463278 kubelet[4931]: E1207 23:00:58.892369    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4lmsb" podUID="d124a65e-90c9-4e59-9b92-972861b8b41e"
	Dec 07 23:01:01 functional-463278 kubelet[4931]: E1207 23:01:01.891968    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-h4vhx" podUID="6311e70b-1eaf-4894-8481-b6038190b988"
	Dec 07 23:01:02 functional-463278 kubelet[4931]: E1207 23:01:02.892170    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-g6tqf" podUID="868ae89c-0be5-4a7d-8bfe-31984dcc4024"
	Dec 07 23:01:03 functional-463278 kubelet[4931]: E1207 23:01:03.893368    4931 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xmmkt" podUID="202eaeac-4520-44e8-989e-8b210bdf6bbf"
	
	
	==> storage-provisioner [b99f837de77a93ec0a46cbca2aaa5e50fe6a42771eb16f9bc6427a5396f37b5c] <==
	W1207 23:00:42.325594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:44.328740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:44.332494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:46.335107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:46.338659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:48.341728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:48.345272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:50.347782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:50.351619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:52.354980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:52.358294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:54.360640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:54.364473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:56.367748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:56.372475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:58.374983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:00:58.378404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:00.381325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:00.386960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:02.390424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:02.394227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:04.397092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:04.400536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:06.404047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:01:06.407762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f17929caca3262a3a1b351d4290ad2ef17b04e0326dd1bbaaaad3c4f6ecc1761] <==
	W1207 22:44:46.734205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:46.737325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:44:46.832545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-463278_fdf61bc9-6000-426f-adc4-cbf24786c29e!
	W1207 22:44:48.739998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:48.743651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.747390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:50.751564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.754931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:52.758942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.761505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.766768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.770510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:56.774194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.777335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.782858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.786501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:00.790145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.793348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.798083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.800761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:04.804484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.807470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:06.811061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.814264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.818406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-463278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1 (90.965879ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://e75f0ea836e871d2e96be7ca7f972756573bf9e52f1b5f150b60e8b28052bef3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:45:57 +0000
	      Finished:     Sun, 07 Dec 2025 22:45:57 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nlmhb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-nlmhb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15m   default-scheduler  Successfully assigned default/busybox-mount to functional-463278
	  Normal  Pulling    15m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-g6tqf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tfpqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g6tqf to functional-463278
	  Warning  Failed     13m (x2 over 15m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  12m (x5 over 15m)  kubelet  Pulling image "kicbase/echo-server"
	  Warning  Failed   12m (x5 over 15m)  kubelet  Error: ErrImagePull
	  Warning  Failed   12m (x3 over 14m)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  6s (x65 over 15m)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   6s (x65 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-h4vhx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7v52 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l7v52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m11s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h4vhx to functional-463278
	  Normal   Pulling    6m23s (x5 over 9m11s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m22s (x5 over 9m10s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m22s (x5 over 9m10s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m6s (x20 over 9m10s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  3m51s (x21 over 9m10s)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-kk5fh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7b56l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7b56l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-kk5fh to functional-463278
	  Warning  Failed     8m38s (x3 over 9m46s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m5s (x5 over 10m)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   7m4s (x2 over 10m)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   7m4s (x5 over 10m)    kubelet  Error: ErrImagePull
	  Warning  Failed   4m56s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m44s (x21 over 10m)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkl8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkl8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/nginx-svc to functional-463278
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)     kubelet  Error: ErrImagePull
	  Normal   BackOff  4m58s (x42 over 15m)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed   4m58s (x42 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-463278/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jmx4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jmx4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/sp-pod to functional-463278
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)     kubelet  Error: ErrImagePull
	  Normal   BackOff  4m59s (x41 over 15m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   4m59s (x41 over 15m)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4lmsb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xmmkt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-463278 describe pod busybox-mount hello-node-75c85bcc94-g6tqf hello-node-connect-7d85dfc575-h4vhx mysql-5bb876957f-kk5fh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4lmsb kubernetes-dashboard-855c9754f9-xmmkt: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.54s)
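Every pod in this run that pulls from docker.io fails the same way: kubelet's GET against registry-1.docker.io returns 429 Too Many Requests because the unauthenticated pull quota for the runner's IP is exhausted. One way to confirm the quota state from the host, following Docker's documented rate-limit check (a sketch assuming curl and jq are available; ratelimitpreview/test is the repository Docker Hub provides for this purpose):

	# Request an anonymous pull token, then read the rate-limit headers from a HEAD request.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

When the limit is exhausted, ratelimit-remaining drops to 0 (or the request itself returns 429), which matches the toomanyrequests error quoted in the events above.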

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-463278 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-463278 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-g6tqf" [868ae89c-0be5-4a7d-8bfe-31984dcc4024] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-07 22:55:54.50182208 +0000 UTC m=+1494.654488165
functional_test.go:1460: (dbg) Run:  kubectl --context functional-463278 describe po hello-node-75c85bcc94-g6tqf -n default
functional_test.go:1460: (dbg) kubectl --context functional-463278 describe po hello-node-75c85bcc94-g6tqf -n default:
Name:             hello-node-75c85bcc94-g6tqf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-463278/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfpqk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tfpqk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-g6tqf to functional-463278
  Warning  Failed     8m24s (x2 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling  6m57s (x5 over 10m)    kubelet  Pulling image "kicbase/echo-server"
  Warning  Failed   6m56s (x5 over 9m59s)  kubelet  Error: ErrImagePull
  Warning  Failed   6m56s (x3 over 9m45s)  kubelet  Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff  4m49s (x21 over 9m59s)  kubelet  Back-off pulling image "kicbase/echo-server"
  Warning  Failed   4m49s (x21 over 9m59s)  kubelet  Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-463278 logs hello-node-75c85bcc94-g6tqf -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-463278 logs hello-node-75c85bcc94-g6tqf -n default: exit status 1 (64.295759ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-g6tqf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-463278 logs hello-node-75c85bcc94-g6tqf -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
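The harness polls for a Ready pod labelled app=hello-node for 10m. The same wait can be reproduced outside the test with plain kubectl against the profile's context (a sketch using the deployment created above):

	# The rollout never completes in this run; these commands surface the same timeout and reason.
	kubectl --context functional-463278 rollout status deployment/hello-node --timeout=10m
	kubectl --context functional-463278 get pods -l app=hello-node -o wide
	kubectl --context functional-463278 wait --for=condition=Ready pod -l app=hello-node --timeout=600s

Both the rollout and the wait time out for the same underlying reason as the other failures: the kicbase/echo-server image cannot be pulled past Docker Hub's rate limit.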

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-463278 apply -f testdata/testsvc.yaml
I1207 22:46:04.380080  375863 detect.go:223] nested VM detected
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a6ea4e44-3527-4f0c-9cae-328ed5b798c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-463278 -n functional-463278
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-12-07 22:50:04.870528033 +0000 UTC m=+1145.023194106
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-463278 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-463278 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-463278/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:46:04 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkl8s (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-fkl8s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-463278
  Normal   Pulling    60s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     59s (x5 over 3m58s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   59s (x5 over 3m58s)  kubelet  Error: ErrImagePull
  Normal   BackOff  6s (x14 over 3m57s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed   6s (x14 over 3m57s)  kubelet  Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-463278 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-463278 logs nginx-svc -n default: exit status 1 (63.525436ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-463278 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.64s)
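Unlike the untagged docker.io/nginx image in the sp-pod case, nginx-svc pins a tag (docker.io/nginx:alpine), so unless the test manifest overrides it the default imagePullPolicy is IfNotPresent and pre-loading the image into the node sidesteps the registry entirely. A possible mitigation sketch, assuming the host can still pull (or already caches) the image:

	# Pull once on the host, then copy the image into the minikube node's containerd store.
	docker pull docker.io/nginx:alpine
	minikube -p functional-463278 image load docker.io/nginx:alpine
	# kubelet's next back-off retry finds the image locally and the pod can start.
	kubectl --context functional-463278 get pod nginx-svc -w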

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1207 22:50:04.996728  375863 retry.go:31] will retry after 1.768120756s: Temporary Error: Get "http:": http: no Host in request URL
I1207 22:50:06.765807  375863 retry.go:31] will retry after 5.959143928s: Temporary Error: Get "http:": http: no Host in request URL
I1207 22:50:12.725553  375863 retry.go:31] will retry after 5.394616141s: Temporary Error: Get "http:": http: no Host in request URL
I1207 22:50:18.121223  375863 retry.go:31] will retry after 9.615862361s: Temporary Error: Get "http:": http: no Host in request URL
I1207 22:50:27.738058  375863 retry.go:31] will retry after 21.52985249s: Temporary Error: Get "http:": http: no Host in request URL
I1207 22:50:49.268429  375863 retry.go:31] will retry after 19.904617008s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-463278 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.101.91.117   10.101.91.117   80:31484/TCP   5m52s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.84s)
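The repeated "no Host in request URL" errors indicate the URL carried over from the failed WaitService step is empty, not that the tunnel is down: the service table above shows minikube tunnel did assign an external IP (10.101.91.117). The check the test attempts reduces to the following, assuming the tunnel is still running:

	# Read the LoadBalancer IP assigned by the tunnel, then fetch the nginx welcome page.
	EXTERNAL_IP=$(kubectl --context functional-463278 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${EXTERNAL_IP}" | grep "Welcome to nginx!"

Even done by hand this would fail in this run, since the nginx-svc pod never pulled its image and nothing is serving behind the LoadBalancer address.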

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 service --namespace=default --https --url hello-node: exit status 115 (529.344491ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30139
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-463278 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
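SVC_UNREACHABLE here means minikube resolved the NodePort (30139) but found no running pod behind the service. The underlying state is visible directly with kubectl, and the same condition explains the Format and URL subtests below:

	# The Service and its NodePort exist, but the endpoints list is empty
	# because no hello-node pod ever became Ready.
	kubectl --context functional-463278 get svc hello-node
	kubectl --context functional-463278 get endpoints hello-node
	kubectl --context functional-463278 get pods -l app=hello-node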

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 service hello-node --url --format={{.IP}}: exit status 115 (521.991932ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-463278 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 service hello-node --url: exit status 115 (524.237523ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30139
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-463278 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30139
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-372098 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-372098 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-372098 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-372098 --alsologtostderr -v=1] stderr:
I1207 23:10:02.888267  457994 out.go:360] Setting OutFile to fd 1 ...
I1207 23:10:02.888678  457994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:02.888709  457994 out.go:374] Setting ErrFile to fd 2...
I1207 23:10:02.888717  457994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:02.889162  457994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 23:10:02.889666  457994 mustload.go:66] Loading cluster: functional-372098
I1207 23:10:02.890088  457994 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:02.890492  457994 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:02.908960  457994 host.go:66] Checking if "functional-372098" exists ...
I1207 23:10:02.909173  457994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 23:10:02.964114  457994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 23:10:02.954815957 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 23:10:02.964225  457994 api_server.go:166] Checking apiserver status ...
I1207 23:10:02.964266  457994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 23:10:02.964301  457994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:02.981208  457994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:03.078076  457994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4893/cgroup
W1207 23:10:03.086582  457994 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4893/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1207 23:10:03.086634  457994 ssh_runner.go:195] Run: ls
I1207 23:10:03.090069  457994 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1207 23:10:03.095186  457994 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1207 23:10:03.095251  457994 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1207 23:10:03.095431  457994 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:03.095454  457994 addons.go:70] Setting dashboard=true in profile "functional-372098"
I1207 23:10:03.095473  457994 addons.go:239] Setting addon dashboard=true in "functional-372098"
I1207 23:10:03.095519  457994 host.go:66] Checking if "functional-372098" exists ...
I1207 23:10:03.095979  457994 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:03.115242  457994 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1207 23:10:03.116543  457994 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1207 23:10:03.117629  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1207 23:10:03.117646  457994 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1207 23:10:03.117731  457994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:03.134820  457994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:03.232550  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1207 23:10:03.232582  457994 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1207 23:10:03.244779  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1207 23:10:03.244803  457994 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1207 23:10:03.256850  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1207 23:10:03.256874  457994 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1207 23:10:03.269600  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1207 23:10:03.269618  457994 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1207 23:10:03.282014  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1207 23:10:03.282031  457994 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1207 23:10:03.293722  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1207 23:10:03.293739  457994 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1207 23:10:03.305811  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1207 23:10:03.305829  457994 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1207 23:10:03.317486  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1207 23:10:03.317506  457994 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1207 23:10:03.329492  457994 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1207 23:10:03.329512  457994 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1207 23:10:03.341519  457994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1207 23:10:03.738278  457994 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-372098 addons enable metrics-server

                                                
                                                
I1207 23:10:03.739278  457994 addons.go:202] Writing out "functional-372098" config to set dashboard=true...
W1207 23:10:03.739480  457994 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1207 23:10:03.740115  457994 kapi.go:59] client config for functional-372098: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.key", CAFile:"/home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 23:10:03.740538  457994 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 23:10:03.740555  457994 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 23:10:03.740560  457994 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 23:10:03.740567  457994 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 23:10:03.740574  457994 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 23:10:03.747446  457994 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  cb5d26da-f86e-48f7-bec3-067745830146 1311 0 2025-12-07 23:10:03 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-07 23:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.212.114,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.212.114],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1207 23:10:03.747619  457994 out.go:285] * Launching proxy ...
* Launching proxy ...
I1207 23:10:03.747679  457994 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-372098 proxy --port 36195]
I1207 23:10:03.747936  457994 dashboard.go:159] Waiting for kubectl to output host:port ...
I1207 23:10:03.793246  457994 out.go:203] 
W1207 23:10:03.794385  457994 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1207 23:10:03.794400  457994 out.go:285] * 
* 
W1207 23:10:03.798484  457994 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1207 23:10:03.799566  457994 out.go:203] 
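The dashboard command installed the addon successfully and failed only at the last step: it spawns kubectl proxy --port 36195 and waits to read a host:port line from its stdout, and the EOF above means that child process exited without printing one. The same step can be tried by hand (a sketch, assuming the same context and port; any immediate exit, for example a port already in use, reproduces the EOF):

	kubectl --context functional-372098 proxy --port 36195
	# On success kubectl typically prints something like: Starting to serve on 127.0.0.1:36195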
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-372098
helpers_test.go:243: (dbg) docker inspect functional-372098:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	        "Created": "2025-12-07T23:02:06.580313062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:02:06.615285569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hostname",
	        "HostsPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hosts",
	        "LogPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0-json.log",
	        "Name": "/functional-372098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-372098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-372098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	                "LowerDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-372098",
	                "Source": "/var/lib/docker/volumes/functional-372098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-372098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-372098",
	                "name.minikube.sigs.k8s.io": "functional-372098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2fa3a3be74c47da6ccc47461bbc5b06579e767397576b1af8ed9fd637c04ccf",
	            "SandboxKey": "/var/run/docker/netns/a2fa3a3be74c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-372098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9ed146fed112ca9fd37afbd507e6575957a4ed48d7066777c5186f06129931c",
	                    "EndpointID": "acda0ebebf0dc04b91546f7aa782adebebea917dc5ceb5777c345b196684711a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "16:b1:7a:3d:6a:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-372098",
	                        "24bb4d0c1a95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
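The host-port mappings in the NetworkSettings.Ports block above are what minikube's cli_runner queries earlier in this log (for example the SSH port 33165 used by ssh_runner). The same lookup can be done directly with the Go template shown in those log lines:

	# Extract the host port mapped to the node's SSH port (22/tcp), as minikube does.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-372098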
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-372098 -n functional-372098
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 logs -n 25: (1.176919776s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-372098 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ mount     │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3701926629/001:/mount-9p --alsologtostderr -v=1              │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-372098 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh -- ls -la /mount-9p                                                                                                           │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh cat /mount-9p/test-1765148993199053283                                                                                        │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh stat /mount-9p/created-by-test                                                                                                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh sudo umount -f /mount-9p                                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ mount     │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo181578777/001:/mount-9p --alsologtostderr -v=1 --port 46464  │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-372098 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh -- ls -la /mount-9p                                                                                                           │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh       │ functional-372098 ssh sudo umount -f /mount-9p                                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-372098 ssh findmnt -T /mount1                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount3 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount2 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount1 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-372098 ssh findmnt -T /mount1                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh       │ functional-372098 ssh findmnt -T /mount2                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh       │ functional-372098 ssh findmnt -T /mount3                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ mount     │ -p functional-372098 --kill=true                                                                                                                    │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-372098 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-372098 --alsologtostderr -v=1                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:10:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:10:02.675202  457901 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:10:02.675303  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675311  457901 out.go:374] Setting ErrFile to fd 2...
	I1207 23:10:02.675315  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675574  457901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:10:02.675985  457901 out.go:368] Setting JSON to false
	I1207 23:10:02.676932  457901 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6752,"bootTime":1765142251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:10:02.676985  457901 start.go:143] virtualization: kvm guest
	I1207 23:10:02.678751  457901 out.go:179] * [functional-372098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:10:02.679921  457901 notify.go:221] Checking for updates...
	I1207 23:10:02.679967  457901 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:10:02.681267  457901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:10:02.682437  457901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:10:02.683434  457901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:10:02.684504  457901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:10:02.685516  457901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:10:02.687010  457901 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:10:02.687493  457901 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:10:02.711459  457901 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:10:02.711573  457901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:10:02.764536  457901 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 23:10:02.755142825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:10:02.764648  457901 docker.go:319] overlay module found
	I1207 23:10:02.766932  457901 out.go:179] * Using the docker driver based on the existing profile
	I1207 23:10:02.767937  457901 start.go:309] selected driver: docker
	I1207 23:10:02.767949  457901 start.go:927] validating driver "docker" against &{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:10:02.768026  457901 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:10:02.769497  457901 out.go:203] 
	W1207 23:10:02.770495  457901 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 23:10:02.771545  457901 out.go:203] 
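Editor's note: the exit above comes from the dry-run start recorded in the command table earlier; the run requests only 250MB of memory, which is below minikube's usable minimum of 1800MB, so validation aborts before any state is changed. A minimal reproduction sketch using the same profile and flags from the table:

    out/minikube-linux-amd64 start -p functional-372098 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0
    # Expected to abort with RSRC_INSUFFICIENT_REQ_MEMORY, as in the log line above.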
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e14f9112ffcff       56cc512116c8f       9 seconds ago       Exited              mount-munger              0                   fe5ce75149fc9       busybox-mount                               default
	2ccf118549ae3       aa9d02839d8de       6 minutes ago       Running             kube-apiserver            0                   72a2bc1d4f365       kube-apiserver-functional-372098            kube-system
	ef3b33e5c5d94       45f3cc72d235f       6 minutes ago       Running             kube-controller-manager   1                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	0c06ca4e4069e       a3e246e9556e9       6 minutes ago       Running             etcd                      1                   4d67e174203fb       etcd-functional-372098                      kube-system
	67503af5d77f5       7bb6219ddab95       6 minutes ago       Running             kube-scheduler            1                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	10581c840670f       8a4ded35a3eb1       7 minutes ago       Running             kube-proxy                1                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	5cf5db2404f73       409467f978b4a       7 minutes ago       Running             kindnet-cni               1                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	bf437fd6022b3       6e38f40d628db       7 minutes ago       Running             storage-provisioner       1                   f498869d5e01e       storage-provisioner                         kube-system
	d8af143545632       aa5e3ebc0dfed       7 minutes ago       Running             coredns                   1                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	e4204dc06ed5f       aa5e3ebc0dfed       7 minutes ago       Exited              coredns                   0                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	77c5419a5af32       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       0                   f498869d5e01e       storage-provisioner                         kube-system
	b4896657616d8       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	91c336f27dcdb       8a4ded35a3eb1       7 minutes ago       Exited              kube-proxy                0                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	27c569bb3458a       45f3cc72d235f       7 minutes ago       Exited              kube-controller-manager   0                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	bb6c22ba4f99b       a3e246e9556e9       7 minutes ago       Exited              etcd                      0                   4d67e174203fb       etcd-functional-372098                      kube-system
	def0377c6f0d1       7bb6219ddab95       7 minutes ago       Exited              kube-scheduler            0                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
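Editor's note: the table above is CRI-level container state from the node. How this section is collected is an assumption, but with the containerd runtime a command like the following typically yields the same columns (including the Exited first attempts of coredns and etcd):

    out/minikube-linux-amd64 ssh -p functional-372098 -- sudo crictl ps -a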
	
	
	==> containerd <==
	Dec 07 23:09:57 functional-372098 containerd[3768]: time="2025-12-07T23:09:57.654335054Z" level=info msg="TearDown network for sandbox \"fe5ce75149fc9a824e7ece2da7755e093f10012a6cf6efe9f1fa90a0a36e6abe\" successfully"
	Dec 07 23:09:57 functional-372098 containerd[3768]: time="2025-12-07T23:09:57.654403254Z" level=info msg="StopPodSandbox for \"fe5ce75149fc9a824e7ece2da7755e093f10012a6cf6efe9f1fa90a0a36e6abe\" returns successfully"
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.443685160Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.443846328Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.444677817Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.444810308Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.445647292Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.445765002Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.446463970Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.446548406Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.447339999Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.447426468Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.448154493Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.448231655Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.448895709Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.449009175Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.1GB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.449854222Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.2MB.events\""
	Dec 07 23:10:01 functional-372098 containerd[3768]: time="2025-12-07T23:10:01.449968149Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.1GB.events\""
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.018049196Z" level=info msg="RunPodSandbox for name:\"kubernetes-dashboard-b84665fb8-nghff\"  uid:\"7558f757-a03e-4c8d-a71a-92ce283b4c06\"  namespace:\"kubernetes-dashboard\""
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.023885494Z" level=info msg="RunPodSandbox for name:\"dashboard-metrics-scraper-5565989548-zbt2f\"  uid:\"b536e949-6d38-400b-bece-d762f111be94\"  namespace:\"kubernetes-dashboard\""
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.054974758Z" level=info msg="connecting to shim d18746176f036b1613eee0e69e471c8667d05636432624464a65f9c596a0b2b6" address="unix:///run/containerd/s/4428e9467b6448e477809be687b944ac5cbf530136401117dd484df7f541f620" namespace=k8s.io protocol=ttrpc version=3
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.059530039Z" level=info msg="connecting to shim 8e01ac9c82b0e074edfadefd973322638ccdcb612fb3125e392e1abd2591ef35" address="unix:///run/containerd/s/7e99ed2e39ee0527b5622416a2bff01be4cb3751be6f97c844ebd3a45d9c1a92" namespace=k8s.io protocol=ttrpc version=3
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.123041302Z" level=info msg="RunPodSandbox for name:\"kubernetes-dashboard-b84665fb8-nghff\"  uid:\"7558f757-a03e-4c8d-a71a-92ce283b4c06\"  namespace:\"kubernetes-dashboard\" returns sandbox id \"d18746176f036b1613eee0e69e471c8667d05636432624464a65f9c596a0b2b6\""
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.124502023Z" level=info msg="RunPodSandbox for name:\"dashboard-metrics-scraper-5565989548-zbt2f\"  uid:\"b536e949-6d38-400b-bece-d762f111be94\"  namespace:\"kubernetes-dashboard\" returns sandbox id \"8e01ac9c82b0e074edfadefd973322638ccdcb612fb3125e392e1abd2591ef35\""
	Dec 07 23:10:04 functional-372098 containerd[3768]: time="2025-12-07T23:10:04.125736013Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
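Editor's note: the repeated 'unable to parse "max 0" as a uint' errors appear to be stats-collection noise rather than functional failures: cgroup v2 hugetlb event files contain key/value lines ("max <count>") instead of a bare integer, so the collector logs one error per pod scope it scans. A sketch for inspecting one of the files (path copied from the first error above):

    out/minikube-linux-amd64 ssh -p functional-372098 -- sudo cat \
      /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.2MB.events
    # Typically prints a single line such as: max 0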
	
	
	==> coredns [d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42358 - 39813 "HINFO IN 2893170015885620547.3879998501473278557. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.879961652s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44361 - 37708 "HINFO IN 8252574301961499388.8195395475701870576. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085806688s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-372098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-372098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-372098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_02_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:02:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-372098
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:10:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-372098
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2f15a4e4-5557-42d7-a12d-8f37ddc06bdc
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-h9ctn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     hello-node-connect-9f67c86d4-rt6h6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     mysql-844cf969f6-swxtx                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m26s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 coredns-7d764666f9-k5tm5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m42s
	  kube-system                 etcd-functional-372098                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m47s
	  kube-system                 kindnet-t6d26                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m42s
	  kube-system                 kube-apiserver-functional-372098              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-controller-manager-functional-372098     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-proxy-rsjbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-scheduler-functional-372098              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-zbt2f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nghff          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m43s  node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
	  Normal  RegisteredNode  6m48s  node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
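Editor's note: the node summary above is standard kubectl output and can be regenerated against the same cluster (minikube names the kubeconfig context after the profile by default):

    kubectl --context functional-372098 describe node functional-372098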
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
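Editor's note: the "martian source" lines are the kernel logging packets whose source address (the 10.244.0.x pod range here) arrived on an interface where that source is not considered routable; they only appear when martian logging is enabled. A sketch for checking that toggle on the node (with the docker driver the node shares the host kernel):

    out/minikube-linux-amd64 ssh -p functional-372098 -- sysctl net.ipv4.conf.all.log_martians
    # 1 = log martian packets to dmesg (as above); 0 = silence them.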
	
	
	==> etcd [0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9] <==
	{"level":"warn","ts":"2025-12-07T23:03:12.237829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.243608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.249882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.258230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.264336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.270498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.277120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.291838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.298227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.304563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.312856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.318729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.325848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.331520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.344456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.351231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.357336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.363314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.369310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.375148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.388450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.395023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.401166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.407249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.457756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	
	
	==> etcd [bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb] <==
	{"level":"warn","ts":"2025-12-07T23:02:14.265025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.271024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.288678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.294504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.300407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.306862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.353907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:03:09.553511Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T23:03:09.553622Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T23:03:09.553773Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555329Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555392Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.555422Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T23:03:09.555453Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T23:03:09.555451Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555458Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555508Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555518Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555542Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557307Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T23:03:09.557363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557392Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T23:03:09.557400Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:10:05 up  1:52,  0 user,  load average: 0.28, 0.23, 0.43
	Linux functional-372098 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f] <==
	I1207 23:08:00.813327       1 main.go:301] handling current node
	I1207 23:08:10.811821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:10.811872       1 main.go:301] handling current node
	I1207 23:08:20.808359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:20.808417       1 main.go:301] handling current node
	I1207 23:08:30.812278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:30.812320       1 main.go:301] handling current node
	I1207 23:08:40.807748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:40.807784       1 main.go:301] handling current node
	I1207 23:08:50.808067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:50.808098       1 main.go:301] handling current node
	I1207 23:09:00.807310       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:00.807342       1 main.go:301] handling current node
	I1207 23:09:10.815087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:10.815117       1 main.go:301] handling current node
	I1207 23:09:20.807329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:20.807360       1 main.go:301] handling current node
	I1207 23:09:30.807557       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:30.807588       1 main.go:301] handling current node
	I1207 23:09:40.806779       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:40.806834       1 main.go:301] handling current node
	I1207 23:09:50.806871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:50.806926       1 main.go:301] handling current node
	I1207 23:10:00.808809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:10:00.808845       1 main.go:301] handling current node
	
	
	==> kindnet [b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21] <==
	I1207 23:02:23.611610       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:02:23.611876       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 23:02:23.612007       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:02:23.612023       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:02:23.612042       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:02:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:02:23.812318       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:02:23.812611       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:02:23.812779       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:02:23.813113       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:02:24.205586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:02:24.205618       1 metrics.go:72] Registering metrics
	I1207 23:02:24.205710       1 controller.go:711] "Syncing nftables rules"
	I1207 23:02:33.816182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:33.816279       1 main.go:301] handling current node
	I1207 23:02:43.816778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:43.816822       1 main.go:301] handling current node
	I1207 23:02:53.814190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:53.814234       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889] <==
	I1207 23:03:12.895419       1 aggregator.go:187] initial CRD sync complete...
	I1207 23:03:12.895428       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:03:12.895433       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:03:12.895439       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:03:12.900034       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:03:12.916725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:03:12.917953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:03:13.621946       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:03:13.798835       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1207 23:03:14.002955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:03:14.004054       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:03:14.008020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:03:14.479237       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:03:14.565406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:03:14.609604       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:03:14.614746       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:03:26.657356       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:03:32.687174       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.66.15"}
	I1207 23:03:38.179099       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.192.100"}
	I1207 23:03:38.781988       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.9.0"}
	I1207 23:03:39.826372       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.28.61"}
	I1207 23:09:18.560342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.155.3"}
	I1207 23:10:03.625772       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:10:03.720562       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.212.114"}
	I1207 23:10:03.731953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.202.202"}
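Editor's note: the last two allocations correspond to the dashboard components that began deploying at 23:10:03; the resulting Services can be listed directly (context name assumed to match the profile):

    kubectl --context functional-372098 -n kubernetes-dashboard get svc
    # Expected to show kubernetes-dashboard (10.105.212.114) and dashboard-metrics-scraper (10.106.202.202).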
	
	
	==> kube-controller-manager [27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4] <==
	I1207 23:02:21.521818       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522069       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522103       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522229       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522340       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522614       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522682       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522732       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522819       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523110       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523382       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523713       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524812       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524910       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.525007       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.526069       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:21.528785       1 range_allocator.go:433] "Set node PodCIDR" node="functional-372098" podCIDRs=["10.244.0.0/24"]
	I1207 23:02:21.535609       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622638       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622653       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:02:21.622657       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:02:21.627127       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:36.682963       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640] <==
	I1207 23:03:16.031183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:16.031625       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.031943       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.032375       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033656       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033712       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033727       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033905       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033735       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034026       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034591       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034672       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034741       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.036367       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.038000       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128372       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128388       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:03:16.128393       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:03:16.132205       1 shared_informer.go:377] "Caches are synced"
	E1207 23:10:03.668806       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.672253       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.675555       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.680369       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.680408       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.684339       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
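Editor's note: the 'serviceaccount "kubernetes-dashboard" not found' errors look like a startup race: the ReplicaSet controller tried to create dashboard pods before the ServiceAccount existed in the just-created namespace, and the containerd log above shows the dashboard sandboxes starting moments later, so the retries evidently succeeded. To confirm the account is present:

    kubectl --context functional-372098 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard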
	
	
	==> kube-proxy [10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72] <==
	I1207 23:03:00.432540       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:03:00.497444       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:20.097782       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:20.097817       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:03:20.097903       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:03:20.118707       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:03:20.118766       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:03:20.124259       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:03:20.124567       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:03:20.124584       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:20.126112       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:03:20.126122       1 config.go:200] "Starting service config controller"
	I1207 23:03:20.126148       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:03:20.126133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:03:20.126225       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:03:20.126240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:03:20.126247       1 config.go:309] "Starting node config controller"
	I1207 23:03:20.126584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:03:20.126592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:03:20.226321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:03:20.226350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:03:20.226821       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a] <==
	I1207 23:02:23.207160       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:02:23.278330       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:23.378882       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:23.378921       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:02:23.379062       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:02:23.398024       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:02:23.398074       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:02:23.403163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:02:23.403501       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:02:23.403527       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:02:23.405445       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:02:23.405465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:02:23.405483       1 config.go:200] "Starting service config controller"
	I1207 23:02:23.405488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:02:23.405499       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:02:23.405503       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:02:23.405547       1 config.go:309] "Starting node config controller"
	I1207 23:02:23.405552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:02:23.505835       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:02:23.505858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:02:23.505870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:02:23.505883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6] <==
	I1207 23:03:11.407436       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:03:12.809581       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:03:12.809621       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:03:12.809633       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:03:12.809642       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:03:12.836375       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:03:12.836486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:12.840380       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:03:12.840479       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:12.840534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:03:12.840628       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:03:12.941464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd] <==
	E1207 23:02:15.676375       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.677255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1207 23:02:15.702232       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1207 23:02:15.703102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1207 23:02:15.772992       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.773938       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1207 23:02:15.776967       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1207 23:02:15.777880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1207 23:02:15.808060       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1207 23:02:15.808975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1207 23:02:15.828950       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1207 23:02:15.829754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1207 23:02:15.847911       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:02:15.850936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1207 23:02:15.898865       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.899689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1207 23:02:15.951270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1207 23:02:15.952209       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1207 23:02:17.937339       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:09.602953       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 23:03:09.603068       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 23:03:09.603098       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 23:03:09.603105       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1207 23:03:09.603097       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 23:03:09.603128       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 23:09:57 functional-372098 kubelet[4673]: I1207 23:09:57.770685    4673 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/8f6758b3-49ad-4564-b784-73ddc4f61381-test-volume\" (UniqueName: \"kubernetes.io/host-path/8f6758b3-49ad-4564-b784-73ddc4f61381-test-volume\") pod \"8f6758b3-49ad-4564-b784-73ddc4f61381\" (UID: \"8f6758b3-49ad-4564-b784-73ddc4f61381\") "
	Dec 07 23:09:57 functional-372098 kubelet[4673]: I1207 23:09:57.770806    4673 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6758b3-49ad-4564-b784-73ddc4f61381-test-volume" pod "8f6758b3-49ad-4564-b784-73ddc4f61381" (UID: "8f6758b3-49ad-4564-b784-73ddc4f61381"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 07 23:09:57 functional-372098 kubelet[4673]: I1207 23:09:57.772746    4673 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6758b3-49ad-4564-b784-73ddc4f61381-kube-api-access-zfsq7" pod "8f6758b3-49ad-4564-b784-73ddc4f61381" (UID: "8f6758b3-49ad-4564-b784-73ddc4f61381"). InnerVolumeSpecName "kube-api-access-zfsq7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 07 23:09:57 functional-372098 kubelet[4673]: I1207 23:09:57.871966    4673 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfsq7\" (UniqueName: \"kubernetes.io/projected/8f6758b3-49ad-4564-b784-73ddc4f61381-kube-api-access-zfsq7\") on node \"functional-372098\" DevicePath \"\""
	Dec 07 23:09:57 functional-372098 kubelet[4673]: I1207 23:09:57.871995    4673 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/8f6758b3-49ad-4564-b784-73ddc4f61381-test-volume\") on node \"functional-372098\" DevicePath \"\""
	Dec 07 23:09:58 functional-372098 kubelet[4673]: I1207 23:09:58.592839    4673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe5ce75149fc9a824e7ece2da7755e093f10012a6cf6efe9f1fa90a0a36e6abe"
	Dec 07 23:10:03 functional-372098 kubelet[4673]: I1207 23:10:03.807740    4673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7558f757-a03e-4c8d-a71a-92ce283b4c06-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-nghff\" (UID: \"7558f757-a03e-4c8d-a71a-92ce283b4c06\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff"
	Dec 07 23:10:03 functional-372098 kubelet[4673]: I1207 23:10:03.807791    4673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chkmv\" (UniqueName: \"kubernetes.io/projected/b536e949-6d38-400b-bece-d762f111be94-kube-api-access-chkmv\") pod \"dashboard-metrics-scraper-5565989548-zbt2f\" (UID: \"b536e949-6d38-400b-bece-d762f111be94\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f"
	Dec 07 23:10:03 functional-372098 kubelet[4673]: I1207 23:10:03.807821    4673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b536e949-6d38-400b-bece-d762f111be94-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-zbt2f\" (UID: \"b536e949-6d38-400b-bece-d762f111be94\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f"
	Dec 07 23:10:03 functional-372098 kubelet[4673]: I1207 23:10:03.807847    4673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhrj\" (UniqueName: \"kubernetes.io/projected/7558f757-a03e-4c8d-a71a-92ce283b4c06-kube-api-access-mhhrj\") pod \"kubernetes-dashboard-b84665fb8-nghff\" (UID: \"7558f757-a03e-4c8d-a71a-92ce283b4c06\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff"
	Dec 07 23:10:04 functional-372098 kubelet[4673]: E1207 23:10:04.635615    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:10:04 functional-372098 kubelet[4673]: E1207 23:10:04.636336    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:10:05 functional-372098 kubelet[4673]: E1207 23:10:05.065581    4673 log.go:32] "PullImage from image service failed" err=<
	Dec 07 23:10:05 functional-372098 kubelet[4673]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Dec 07 23:10:05 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:10:05 functional-372098 kubelet[4673]:  > image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 23:10:05 functional-372098 kubelet[4673]: E1207 23:10:05.065639    4673 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 23:10:05 functional-372098 kubelet[4673]:         failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Dec 07 23:10:05 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:10:05 functional-372098 kubelet[4673]:  > image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 23:10:05 functional-372098 kubelet[4673]: E1207 23:10:05.065977    4673 kuberuntime_manager.go:1664] "Unhandled Error" err=<
	Dec 07 23:10:05 functional-372098 kubelet[4673]:         container kubernetes-dashboard start failed in pod kubernetes-dashboard-b84665fb8-nghff_kubernetes-dashboard(7558f757-a03e-4c8d-a71a-92ce283b4c06): ErrImagePull: failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests
	Dec 07 23:10:05 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:10:05 functional-372098 kubelet[4673]:  > logger="UnhandledError"
	Dec 07 23:10:05 functional-372098 kubelet[4673]: E1207 23:10:05.066025    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	
	
	==> storage-provisioner [77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef] <==
	I1207 23:02:34.473761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-372098_2723da14-e3ab-463a-b591-bd211f668a0a!
	W1207 23:02:36.381294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:36.384835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.387295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.390656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.394077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.397975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.400909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.405558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.409032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.412633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.416254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.420950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.424007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.428978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.431882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.436004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.438706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.442377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.445344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.450611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.453177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.457032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.459733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.463288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f] <==
	W1207 23:09:40.089997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:42.093542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:42.097152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:44.099716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:44.103270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:46.106290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:46.109845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:48.112427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:48.116996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:50.120425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:50.124789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:52.128025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:52.142885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:54.145636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:54.149627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:56.152598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:56.156384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:58.160143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:58.164259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:10:00.167771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:10:00.172773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:10:02.177040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:10:02.181206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:10:04.184527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:10:04.190312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
helpers_test.go:269: (dbg) Run:  kubectl --context functional-372098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff: exit status 1 (93.285827ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://e14f9112ffcff7df1cd79e835cdc327efffd87d59175b498d92b80986079fea1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 23:09:55 +0000
	      Finished:     Sun, 07 Dec 2025 23:09:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfsq7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zfsq7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-372098
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    10s   kubelet            Container created
	  Normal  Started    10s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-h9ctn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bq8h8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bq8h8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m27s                  default-scheduler  Successfully assigned default/hello-node-5758569b79-h9ctn to functional-372098
	  Normal   Pulling    3m20s (x5 over 6m26s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m18s (x5 over 6m25s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m18s (x5 over 6m25s)  kubelet  Error: ErrImagePull
	  Warning  Failed   79s (x20 over 6m25s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  66s (x21 over 6m25s)   kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-rt6h6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsmdx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tsmdx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  47s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-rt6h6 to functional-372098
	  Normal   BackOff    20s (x2 over 45s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     20s (x2 over 45s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x3 over 47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8s (x3 over 46s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed  8s (x3 over 46s)  kubelet  Error: ErrImagePull
	
	
	Name:             mysql-844cf969f6-swxtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n2f6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2n2f6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m27s                 default-scheduler  Successfully assigned default/mysql-844cf969f6-swxtx to functional-372098
	  Warning  Failed     5m2s (x3 over 6m11s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  3m40s (x5 over 6m27s)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   3m39s (x2 over 6m26s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m39s (x5 over 6m26s)  kubelet  Error: ErrImagePull
	  Warning  Failed   74s (x20 over 6m25s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  63s (x21 over 6m25s)   kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:39 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cjdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cjdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m26s                  default-scheduler  Successfully assigned default/nginx-svc to functional-372098
	  Normal   Pulling    3m38s (x5 over 6m25s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m37s (x5 over 6m24s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m37s (x5 over 6m24s)  kubelet  Error: ErrImagePull
	  Warning  Failed   79s (x20 over 6m24s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  67s (x21 over 6m24s)   kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:48 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b56ct (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-b56ct:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m17s                  default-scheduler  Successfully assigned default/sp-pod to functional-372098
	  Normal   Pulling    3m20s (x5 over 6m16s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m19s (x5 over 6m15s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m19s (x5 over 6m15s)  kubelet  Error: ErrImagePull
	  Normal   BackOff  71s (x21 over 6m15s)   kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   71s (x21 over 6m15s)   kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-zbt2f" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-nghff" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-372098 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-372098 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-rt6h6" [4120483f-8517-4e8a-87e9-d3efcbee6356] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-07 23:19:18.883765354 +0000 UTC m=+2899.036431428
functional_test.go:1645: (dbg) Run:  kubectl --context functional-372098 describe po hello-node-connect-9f67c86d4-rt6h6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-372098 describe po hello-node-connect-9f67c86d4-rt6h6 -n default:
Name:             hello-node-connect-9f67c86d4-rt6h6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-372098/192.168.49.2
Start Time:       Sun, 07 Dec 2025 23:09:18 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsmdx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tsmdx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-rt6h6 to functional-372098
  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m3s (x5 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   7m3s (x5 over 9m59s)    kubelet  Error: ErrImagePull
  Warning  Failed   4m47s (x20 over 9m58s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m33s (x21 over 9m58s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-372098 logs hello-node-connect-9f67c86d4-rt6h6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-372098 logs hello-node-connect-9f67c86d4-rt6h6 -n default: exit status 1 (65.597788ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-rt6h6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-372098 logs hello-node-connect-9f67c86d4-rt6h6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-372098 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-rt6h6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-372098/192.168.49.2
Start Time:       Sun, 07 Dec 2025 23:09:18 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsmdx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tsmdx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                 From               Message
----     ------     ----                ----               -------
Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-rt6h6 to functional-372098
Normal   Pulling    7m5s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   7m4s (x5 over 10m)      kubelet  Error: ErrImagePull
Warning  Failed   4m48s (x20 over 9m59s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m34s (x21 over 9m59s)  kubelet  Back-off pulling image "kicbase/echo-server"
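The repeated 429 / toomanyrequests events above indicate the pull failures come from Docker Hub's anonymous rate limit rather than from the cluster itself. As a rough cross-check outside the harness, one could fetch an anonymous token and read the registry's rate-limit response headers; a minimal Go sketch follows. The ratelimitpreview/test endpoint and the ratelimit-limit / ratelimit-remaining header names are assumptions taken from Docker's public documentation, not from this log.

// ratelimit_check.go - hypothetical helper, not part of functional_test.go.
// Queries Docker Hub's rate-limit preview image with an anonymous token and
// prints the ratelimit-limit / ratelimit-remaining response headers.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token for the ratelimitpreview/test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; the interesting data is in the response headers.
	req, _ := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	fmt.Println("status:             ", res.Status)
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

If the remaining count is exhausted, pre-loading the image into the node (for example with minikube image load) or using an authenticated pull would likely avoid the anonymous limit.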

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-372098 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-372098 logs -l app=hello-node-connect: exit status 1 (60.867108ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-rt6h6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-372098 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-372098 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.155.3
IPs:                      10.110.155.3
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32524/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
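The empty Endpoints field above is consistent with the only backing pod never leaving ImagePullBackOff: with no ready backend, any request to the NodePort should fail. A minimal standalone probe is sketched below; the node IP 192.168.49.2 and NodePort 32524 are copied from the output above, and the 5-second timeout is an arbitrary illustrative choice, not something the harness uses.

// nodeport_probe.go - hypothetical standalone probe, not part of the test suite.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Node IP and NodePort taken from the service description above.
	resp, err := client.Get("http://192.168.49.2:32524/")
	if err != nil {
		// With no ready endpoints behind the service, this is the expected path.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe succeeded:", resp.Status)
}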
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-372098
helpers_test.go:243: (dbg) docker inspect functional-372098:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	        "Created": "2025-12-07T23:02:06.580313062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:02:06.615285569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hostname",
	        "HostsPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hosts",
	        "LogPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0-json.log",
	        "Name": "/functional-372098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-372098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-372098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	                "LowerDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-372098",
	                "Source": "/var/lib/docker/volumes/functional-372098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-372098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-372098",
	                "name.minikube.sigs.k8s.io": "functional-372098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2fa3a3be74c47da6ccc47461bbc5b06579e767397576b1af8ed9fd637c04ccf",
	            "SandboxKey": "/var/run/docker/netns/a2fa3a3be74c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-372098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9ed146fed112ca9fd37afbd507e6575957a4ed48d7066777c5186f06129931c",
	                    "EndpointID": "acda0ebebf0dc04b91546f7aa782adebebea917dc5ceb5777c345b196684711a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "16:b1:7a:3d:6a:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-372098",
	                        "24bb4d0c1a95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
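The post-mortem uses docker inspect output like the block above to map container ports to host ports (for example, the API server port 8441/tcp is published on 127.0.0.1:33168). A small sketch of reading that mapping programmatically follows; it shells out to docker inspect and decodes only the fields it needs, with the container name taken from this report.

// inspect_ports.go - hypothetical sketch for reading the 8441/tcp host mapping
// from docker inspect output such as the block above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-372098").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("no such container")
	}
	for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
		// For the container above this prints 127.0.0.1:33168.
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}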
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-372098 -n functional-372098
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 logs -n 25: (1.168758276s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount2 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount          │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount1 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ ssh            │ functional-372098 ssh findmnt -T /mount1                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh            │ functional-372098 ssh findmnt -T /mount2                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh            │ functional-372098 ssh findmnt -T /mount3                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ mount          │ -p functional-372098 --kill=true                                                                                                                    │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start          │ -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start          │ -p functional-372098 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start          │ -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-372098 --alsologtostderr -v=1                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ update-context │ functional-372098 update-context --alsologtostderr -v=2                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ update-context │ functional-372098 update-context --alsologtostderr -v=2                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ update-context │ functional-372098 update-context --alsologtostderr -v=2                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format short --alsologtostderr                                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format yaml --alsologtostderr                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh            │ functional-372098 ssh pgrep buildkitd                                                                                                               │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ image          │ functional-372098 image build -t localhost/my-image:functional-372098 testdata/build --alsologtostderr                                              │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls                                                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format json --alsologtostderr                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format table --alsologtostderr                                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ service        │ functional-372098 service list                                                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:13 UTC │ 07 Dec 25 23:13 UTC │
	│ service        │ functional-372098 service list -o json                                                                                                              │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:13 UTC │ 07 Dec 25 23:13 UTC │
	│ service        │ functional-372098 service --namespace=default --https --url hello-node                                                                              │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:13 UTC │                     │
	│ service        │ functional-372098 service hello-node --url --format={{.IP}}                                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:13 UTC │                     │
	│ service        │ functional-372098 service hello-node --url                                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:10:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:10:02.675202  457901 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:10:02.675303  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675311  457901 out.go:374] Setting ErrFile to fd 2...
	I1207 23:10:02.675315  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675574  457901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:10:02.675985  457901 out.go:368] Setting JSON to false
	I1207 23:10:02.676932  457901 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6752,"bootTime":1765142251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:10:02.676985  457901 start.go:143] virtualization: kvm guest
	I1207 23:10:02.678751  457901 out.go:179] * [functional-372098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:10:02.679921  457901 notify.go:221] Checking for updates...
	I1207 23:10:02.679967  457901 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:10:02.681267  457901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:10:02.682437  457901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:10:02.683434  457901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:10:02.684504  457901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:10:02.685516  457901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:10:02.687010  457901 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:10:02.687493  457901 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:10:02.711459  457901 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:10:02.711573  457901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:10:02.764536  457901 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 23:10:02.755142825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:10:02.764648  457901 docker.go:319] overlay module found
	I1207 23:10:02.766932  457901 out.go:179] * Using the docker driver based on the existing profile
	I1207 23:10:02.767937  457901 start.go:309] selected driver: docker
	I1207 23:10:02.767949  457901 start.go:927] validating driver "docker" against &{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:10:02.768026  457901 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:10:02.769497  457901 out.go:203] 
	W1207 23:10:02.770495  457901 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 23:10:02.771545  457901 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e14f9112ffcff       56cc512116c8f       9 minutes ago       Exited              mount-munger              0                   fe5ce75149fc9       busybox-mount                               default
	2ccf118549ae3       aa9d02839d8de       16 minutes ago      Running             kube-apiserver            0                   72a2bc1d4f365       kube-apiserver-functional-372098            kube-system
	ef3b33e5c5d94       45f3cc72d235f       16 minutes ago      Running             kube-controller-manager   1                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	0c06ca4e4069e       a3e246e9556e9       16 minutes ago      Running             etcd                      1                   4d67e174203fb       etcd-functional-372098                      kube-system
	67503af5d77f5       7bb6219ddab95       16 minutes ago      Running             kube-scheduler            1                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	10581c840670f       8a4ded35a3eb1       16 minutes ago      Running             kube-proxy                1                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	5cf5db2404f73       409467f978b4a       16 minutes ago      Running             kindnet-cni               1                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	bf437fd6022b3       6e38f40d628db       16 minutes ago      Running             storage-provisioner       1                   f498869d5e01e       storage-provisioner                         kube-system
	d8af143545632       aa5e3ebc0dfed       16 minutes ago      Running             coredns                   1                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	e4204dc06ed5f       aa5e3ebc0dfed       16 minutes ago      Exited              coredns                   0                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	77c5419a5af32       6e38f40d628db       16 minutes ago      Exited              storage-provisioner       0                   f498869d5e01e       storage-provisioner                         kube-system
	b4896657616d8       409467f978b4a       16 minutes ago      Exited              kindnet-cni               0                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	91c336f27dcdb       8a4ded35a3eb1       16 minutes ago      Exited              kube-proxy                0                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	27c569bb3458a       45f3cc72d235f       17 minutes ago      Exited              kube-controller-manager   0                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	bb6c22ba4f99b       a3e246e9556e9       17 minutes ago      Exited              etcd                      0                   4d67e174203fb       etcd-functional-372098                      kube-system
	def0377c6f0d1       7bb6219ddab95       17 minutes ago      Exited              kube-scheduler            0                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	
	
	==> containerd <==
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.525080693Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.1GB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.525787136Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.2MB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.525879496Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.526482537Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.526584943Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.527224492Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.2MB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.527345522Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.1GB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.528055898Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.2MB.events\""
	Dec 07 23:19:02 functional-372098 containerd[3768]: time="2025-12-07T23:19:02.528129148Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.541752925Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.541852892Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.542587100Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.542686470Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.543361805Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.543453680Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.544443827Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.544550390Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.545420006Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.545531000Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.546357455Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.546451421Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.547289331Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.547394847Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.548212226Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.2MB.events\""
	Dec 07 23:19:12 functional-372098 containerd[3768]: time="2025-12-07T23:19:12.548309142Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.1GB.events\""
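The repeated containerd errors above come from cgroup v2 hugetlb *.events files, whose content is a key/value line such as "max 0" rather than a bare number, so parsing the whole line as an unsigned integer fails. The sketch below reproduces the failing parse and the split that would succeed; the file content is taken from the error messages, everything else is illustrative.

// hugetlb_events.go - illustrative only: shows why "max 0" cannot be parsed
// as a single uint, which is what the containerd errors above complain about.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	line := "max 0" // typical content of hugetlb.2MB.events on cgroup v2

	if _, err := strconv.ParseUint(line, 10, 64); err != nil {
		fmt.Println("naive parse fails:", err) // mirrors the log message
	}

	// Splitting the line into key and value parses cleanly.
	fields := strings.Fields(line)
	v, _ := strconv.ParseUint(fields[1], 10, 64)
	fmt.Printf("key=%q value=%d\n", fields[0], v)
}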
	
	
	==> coredns [d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42358 - 39813 "HINFO IN 2893170015885620547.3879998501473278557. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.879961652s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44361 - 37708 "HINFO IN 8252574301961499388.8195395475701870576. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085806688s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-372098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-372098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-372098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_02_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:02:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-372098
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:19:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:19:10 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:19:10 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:19:10 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:19:10 +0000   Sun, 07 Dec 2025 23:02:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-372098
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2f15a4e4-5557-42d7-a12d-8f37ddc06bdc
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-h9ctn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-9f67c86d4-rt6h6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-swxtx                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7d764666f9-k5tm5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-functional-372098                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-t6d26                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-functional-372098              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-372098     200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-rsjbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-372098              100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-zbt2f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nghff          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  16m   node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
	  Normal  RegisteredNode  16m   node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9] <==
	{"level":"warn","ts":"2025-12-07T23:03:12.277120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.291838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.298227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.304563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.312856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.318729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.325848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.331520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.344456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.351231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.357336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.363314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.369310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.375148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.388450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.395023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.401166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.407249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.457756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:13:11.999861Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1099}
	{"level":"info","ts":"2025-12-07T23:13:12.018777Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1099,"took":"18.599612ms","hash":2532742970,"current-db-size-bytes":3858432,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1929216,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-07T23:13:12.018820Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2532742970,"revision":1099,"compact-revision":-1}
	{"level":"info","ts":"2025-12-07T23:18:12.004546Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1603}
	{"level":"info","ts":"2025-12-07T23:18:12.008388Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1603,"took":"3.494185ms","hash":2395020682,"current-db-size-bytes":3858432,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2580480,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-12-07T23:18:12.008423Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2395020682,"revision":1603,"compact-revision":1099}
	
	
	==> etcd [bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb] <==
	{"level":"warn","ts":"2025-12-07T23:02:14.265025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.271024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.288678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.294504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.300407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.306862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.353907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:03:09.553511Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T23:03:09.553622Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T23:03:09.553773Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555329Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555392Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.555422Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T23:03:09.555453Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T23:03:09.555451Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555458Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555508Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555518Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555542Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557307Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T23:03:09.557363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557392Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T23:03:09.557400Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:19:20 up  2:01,  0 user,  load average: 0.09, 0.18, 0.31
	Linux functional-372098 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f] <==
	I1207 23:17:10.806799       1 main.go:301] handling current node
	I1207 23:17:20.808688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:17:20.808737       1 main.go:301] handling current node
	I1207 23:17:30.815519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:17:30.815557       1 main.go:301] handling current node
	I1207 23:17:40.807356       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:17:40.807386       1 main.go:301] handling current node
	I1207 23:17:50.808137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:17:50.808173       1 main.go:301] handling current node
	I1207 23:18:00.807614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:18:00.807652       1 main.go:301] handling current node
	I1207 23:18:10.807761       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:18:10.807789       1 main.go:301] handling current node
	I1207 23:18:20.807665       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:18:20.807722       1 main.go:301] handling current node
	I1207 23:18:30.815068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:18:30.815113       1 main.go:301] handling current node
	I1207 23:18:40.807284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:18:40.807342       1 main.go:301] handling current node
	I1207 23:18:50.807814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:18:50.807846       1 main.go:301] handling current node
	I1207 23:19:00.806783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:19:00.806821       1 main.go:301] handling current node
	I1207 23:19:10.806787       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:19:10.806840       1 main.go:301] handling current node
	
	
	==> kindnet [b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21] <==
	I1207 23:02:23.611610       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:02:23.611876       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 23:02:23.612007       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:02:23.612023       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:02:23.612042       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:02:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:02:23.812318       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:02:23.812611       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:02:23.812779       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:02:23.813113       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:02:24.205586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:02:24.205618       1 metrics.go:72] Registering metrics
	I1207 23:02:24.205710       1 controller.go:711] "Syncing nftables rules"
	I1207 23:02:33.816182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:33.816279       1 main.go:301] handling current node
	I1207 23:02:43.816778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:43.816822       1 main.go:301] handling current node
	I1207 23:02:53.814190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:53.814234       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889] <==
	I1207 23:03:12.895428       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:03:12.895433       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:03:12.895439       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:03:12.900034       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:03:12.916725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:03:12.917953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:03:13.621946       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:03:13.798835       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1207 23:03:14.002955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:03:14.004054       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:03:14.008020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:03:14.479237       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:03:14.565406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:03:14.609604       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:03:14.614746       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:03:26.657356       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:03:32.687174       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.66.15"}
	I1207 23:03:38.179099       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.192.100"}
	I1207 23:03:38.781988       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.9.0"}
	I1207 23:03:39.826372       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.28.61"}
	I1207 23:09:18.560342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.155.3"}
	I1207 23:10:03.625772       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:10:03.720562       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.212.114"}
	I1207 23:10:03.731953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.202.202"}
	I1207 23:13:12.837591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4] <==
	I1207 23:02:21.521818       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522069       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522103       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522229       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522340       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522614       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522682       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522732       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522819       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523110       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523382       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523713       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524812       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524910       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.525007       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.526069       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:21.528785       1 range_allocator.go:433] "Set node PodCIDR" node="functional-372098" podCIDRs=["10.244.0.0/24"]
	I1207 23:02:21.535609       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622638       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622653       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:02:21.622657       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:02:21.627127       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:36.682963       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640] <==
	I1207 23:03:16.031183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:16.031625       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.031943       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.032375       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033656       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033712       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033727       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033905       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033735       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034026       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034591       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034672       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034741       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.036367       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.038000       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128372       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128388       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:03:16.128393       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:03:16.132205       1 shared_informer.go:377] "Caches are synced"
	E1207 23:10:03.668806       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.672253       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.675555       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.680369       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.680408       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.684339       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72] <==
	I1207 23:03:00.432540       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:03:00.497444       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:20.097782       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:20.097817       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:03:20.097903       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:03:20.118707       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:03:20.118766       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:03:20.124259       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:03:20.124567       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:03:20.124584       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:20.126112       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:03:20.126122       1 config.go:200] "Starting service config controller"
	I1207 23:03:20.126148       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:03:20.126133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:03:20.126225       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:03:20.126240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:03:20.126247       1 config.go:309] "Starting node config controller"
	I1207 23:03:20.126584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:03:20.126592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:03:20.226321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:03:20.226350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:03:20.226821       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a] <==
	I1207 23:02:23.207160       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:02:23.278330       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:23.378882       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:23.378921       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:02:23.379062       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:02:23.398024       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:02:23.398074       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:02:23.403163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:02:23.403501       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:02:23.403527       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:02:23.405445       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:02:23.405465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:02:23.405483       1 config.go:200] "Starting service config controller"
	I1207 23:02:23.405488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:02:23.405499       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:02:23.405503       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:02:23.405547       1 config.go:309] "Starting node config controller"
	I1207 23:02:23.405552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:02:23.505835       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:02:23.505858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:02:23.505870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:02:23.505883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6] <==
	I1207 23:03:11.407436       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:03:12.809581       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:03:12.809621       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:03:12.809633       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:03:12.809642       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:03:12.836375       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:03:12.836486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:12.840380       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:03:12.840479       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:12.840534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:03:12.840628       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:03:12.941464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd] <==
	E1207 23:02:15.676375       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.677255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1207 23:02:15.702232       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1207 23:02:15.703102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1207 23:02:15.772992       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.773938       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1207 23:02:15.776967       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1207 23:02:15.777880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1207 23:02:15.808060       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1207 23:02:15.808975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1207 23:02:15.828950       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1207 23:02:15.829754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1207 23:02:15.847911       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:02:15.850936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1207 23:02:15.898865       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.899689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1207 23:02:15.951270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1207 23:02:15.952209       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1207 23:02:17.937339       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:09.602953       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 23:03:09.603068       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 23:03:09.603098       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 23:03:09.603105       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1207 23:03:09.603097       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 23:03:09.603128       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 23:18:43 functional-372098 kubelet[4673]: E1207 23:18:43.635390    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" containerName="kubernetes-dashboard"
	Dec 07 23:18:43 functional-372098 kubelet[4673]: E1207 23:18:43.636385    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" podUID="b536e949-6d38-400b-bece-d762f111be94"
	Dec 07 23:18:43 functional-372098 kubelet[4673]: E1207 23:18:43.636633    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	Dec 07 23:18:44 functional-372098 kubelet[4673]: E1207 23:18:44.636552    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:18:46 functional-372098 kubelet[4673]: E1207 23:18:46.635527    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:18:49 functional-372098 kubelet[4673]: E1207 23:18:49.635894    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="95c0efdc-87b6-4bd8-aef3-566618b1523e"
	Dec 07 23:18:51 functional-372098 kubelet[4673]: E1207 23:18:51.636207    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	Dec 07 23:18:54 functional-372098 kubelet[4673]: E1207 23:18:54.634917    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" containerName="kubernetes-dashboard"
	Dec 07 23:18:54 functional-372098 kubelet[4673]: E1207 23:18:54.636167    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	Dec 07 23:18:55 functional-372098 kubelet[4673]: E1207 23:18:55.635505    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:18:57 functional-372098 kubelet[4673]: E1207 23:18:57.635400    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" containerName="dashboard-metrics-scraper"
	Dec 07 23:18:57 functional-372098 kubelet[4673]: E1207 23:18:57.636059    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:18:57 functional-372098 kubelet[4673]: E1207 23:18:57.636503    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" podUID="b536e949-6d38-400b-bece-d762f111be94"
	Dec 07 23:18:58 functional-372098 kubelet[4673]: E1207 23:18:58.639086    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:19:04 functional-372098 kubelet[4673]: E1207 23:19:04.635674    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="95c0efdc-87b6-4bd8-aef3-566618b1523e"
	Dec 07 23:19:04 functional-372098 kubelet[4673]: E1207 23:19:04.636392    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	Dec 07 23:19:07 functional-372098 kubelet[4673]: E1207 23:19:07.634821    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" containerName="kubernetes-dashboard"
	Dec 07 23:19:07 functional-372098 kubelet[4673]: E1207 23:19:07.635332    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:19:07 functional-372098 kubelet[4673]: E1207 23:19:07.636017    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	Dec 07 23:19:09 functional-372098 kubelet[4673]: E1207 23:19:09.635941    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:19:09 functional-372098 kubelet[4673]: E1207 23:19:09.636423    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:19:10 functional-372098 kubelet[4673]: E1207 23:19:10.635416    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" containerName="dashboard-metrics-scraper"
	Dec 07 23:19:10 functional-372098 kubelet[4673]: E1207 23:19:10.636561    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" podUID="b536e949-6d38-400b-bece-d762f111be94"
	Dec 07 23:19:15 functional-372098 kubelet[4673]: E1207 23:19:15.636365    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	Dec 07 23:19:19 functional-372098 kubelet[4673]: E1207 23:19:19.635078    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="95c0efdc-87b6-4bd8-aef3-566618b1523e"
	
	
	==> storage-provisioner [77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef] <==
	I1207 23:02:34.473761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-372098_2723da14-e3ab-463a-b591-bd211f668a0a!
	W1207 23:02:36.381294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:36.384835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.387295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.390656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.394077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.397975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.400909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.405558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.409032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.412633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.416254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.420950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.424007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.428978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.431882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.436004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.438706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.442377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.445344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.450611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.453177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.457032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.459733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.463288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f] <==
	W1207 23:18:56.011580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:18:58.014882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:18:58.018480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:00.021146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:00.025176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:02.027797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:02.032373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:04.035976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:04.040458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:06.043188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:06.046821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:08.050164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:08.053927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:10.056918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:10.061409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:12.063729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:12.067272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:14.070177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:14.073812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:16.076683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:16.080364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:18.083565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:18.088457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:20.091574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:19:20.097051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
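
Both storage-provisioner logs above are dominated by one repeated warning: the provisioner still polls the legacy v1 Endpoints API (most likely for its leader-election lock), which the cluster reports as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. A couple of read-only queries confirm the replacement resource is served by this cluster and show which legacy Endpoints objects are still being touched (a minimal sketch against this profile; the lock object's name is not shown in the log, so the last command just lists kube-system):

	kubectl --context functional-372098 api-resources | grep -i endpoint
	kubectl --context functional-372098 get endpointslices -A
	kubectl --context functional-372098 get endpoints -n kube-system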
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
helpers_test.go:269: (dbg) Run:  kubectl --context functional-372098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff: exit status 1 (91.82026ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://e14f9112ffcff7df1cd79e835cdc327efffd87d59175b498d92b80986079fea1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 23:09:55 +0000
	      Finished:     Sun, 07 Dec 2025 23:09:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfsq7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zfsq7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m27s  default-scheduler  Successfully assigned default/busybox-mount to functional-372098
	  Normal  Pulling    9m26s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m26s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    9m26s  kubelet            Container created
	  Normal  Started    9m26s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-h9ctn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bq8h8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bq8h8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/hello-node-5758569b79-h9ctn to functional-372098
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  40s (x64 over 15m)  kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   40s (x64 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-rt6h6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsmdx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tsmdx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-rt6h6 to functional-372098
	  Normal   Pulling    7m7s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   7m6s (x5 over 10m)    kubelet  Error: ErrImagePull
	  Warning  Failed   4m50s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m36s (x21 over 10m)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-swxtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n2f6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2n2f6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/mysql-844cf969f6-swxtx to functional-372098
	  Warning  Failed     14m (x3 over 15m)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  12m (x5 over 15m)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   12m (x2 over 15m)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  30s (x65 over 15m)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed   30s (x65 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:39 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cjdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cjdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/nginx-svc to functional-372098
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  37s (x65 over 15m)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed   37s (x65 over 15m)  kubelet  Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:48 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b56ct (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-b56ct:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/sp-pod to functional-372098
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   12m (x5 over 15m)   kubelet  Error: ErrImagePull
	  Normal   BackOff  32s (x66 over 15m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   17s (x67 over 15m)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-zbt2f" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-nghff" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff: exit status 1
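The two NotFound errors in stderr are a namespace mismatch rather than missing pods: the non-running list was gathered with a cluster-wide field selector (-A), but the follow-up describe ran against the default namespace, while the dashboard addon normally deploys into kubernetes-dashboard. Printing the namespace next to each name avoids the mismatch (illustrative; reuses the selector already shown above):

	kubectl --context functional-372098 get po -A --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name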
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.74s)
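Every pod in the post-mortem above is stuck in ImagePullBackOff on a Docker Hub 429 response rather than on anything cluster-side. When reproducing locally, one way to take the unauthenticated registry pulls out of the loop is to pull the images once on the host (authenticated if needed) and side-load them into the node's containerd store with the image load subcommand that also appears in this run's audit table further down (a sketch; the image list is illustrative):

	docker pull docker.io/kicbase/echo-server:latest
	docker pull docker.io/library/mysql:5.7
	docker pull docker.io/library/nginx:alpine
	minikube -p functional-372098 image load docker.io/kicbase/echo-server:latest
	minikube -p functional-372098 image load docker.io/library/mysql:5.7
	minikube -p functional-372098 image load docker.io/library/nginx:alpine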

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (367.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002643197s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-372098 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-372098 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-372098 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-372098 apply -f testdata/storage-provisioner/pod.yaml
I1207 23:03:48.700166  375863 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [95c0efdc-87b6-4bd8-aef3-566618b1523e] Pending
helpers_test.go:352: "sp-pod" [95c0efdc-87b6-4bd8-aef3-566618b1523e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1207 23:05:54.191776  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.198130  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.209480  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.231763  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.273097  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.354475  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.516058  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.837449  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:55.479116  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:56.761077  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:59.322903  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:06:04.444228  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:06:14.686526  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:06:35.168295  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:07:16.130258  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-07 23:09:49.006198352 +0000 UTC m=+2329.158864425
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-372098 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-372098 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-372098/192.168.49.2
Start Time:       Sun, 07 Dec 2025 23:03:48 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b56ct (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-b56ct:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/sp-pod to functional-372098
  Normal   Pulling    3m4s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m3s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   3m3s (x5 over 5m59s)  kubelet  Error: ErrImagePull
  Normal   BackOff  55s (x21 over 5m59s)  kubelet  Back-off pulling image "docker.io/nginx"
  Warning  Failed   55s (x21 over 5m59s)  kubelet  Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-372098 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-372098 logs sp-pod -n default: exit status 1 (64.141654ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-372098 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
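The claim itself does not appear to be the blocker here: the provisioner controller started (see the storage-provisioner log above) and the test only timed out waiting for sp-pod, whose events again show the Docker Hub 429. A few read-only checks separate the storage path from the image pull (a sketch against this profile):

	# did the PVC bind and a hostpath PV get provisioned?
	kubectl --context functional-372098 get pvc myclaim -o jsonpath='{.status.phase}{"\n"}'
	kubectl --context functional-372098 get pv -o wide
	# the pod-level failure, in event order
	kubectl --context functional-372098 get events --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp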
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-372098
helpers_test.go:243: (dbg) docker inspect functional-372098:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	        "Created": "2025-12-07T23:02:06.580313062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:02:06.615285569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hostname",
	        "HostsPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hosts",
	        "LogPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0-json.log",
	        "Name": "/functional-372098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-372098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-372098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	                "LowerDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-372098",
	                "Source": "/var/lib/docker/volumes/functional-372098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-372098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-372098",
	                "name.minikube.sigs.k8s.io": "functional-372098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2fa3a3be74c47da6ccc47461bbc5b06579e767397576b1af8ed9fd637c04ccf",
	            "SandboxKey": "/var/run/docker/netns/a2fa3a3be74c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-372098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9ed146fed112ca9fd37afbd507e6575957a4ed48d7066777c5186f06129931c",
	                    "EndpointID": "acda0ebebf0dc04b91546f7aa782adebebea917dc5ceb5777c345b196684711a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "16:b1:7a:3d:6a:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-372098",
	                        "24bb4d0c1a95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
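Most of that inspect dump is boilerplate; two of the values the surrounding post-mortem relies on are the published API-server port (8441/tcp) and the node's address on the cluster network, both of which can be read directly with a Go-template format string (a sketch; the index form is used because the map keys contain characters that are not valid template identifiers):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-372098
	docker inspect -f '{{ (index .NetworkSettings.Networks "functional-372098").IPAddress }}' functional-372098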
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-372098 -n functional-372098
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 logs -n 25: (1.14231337s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-372098 ssh sudo cat /etc/ssl/certs/3758632.pem                                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ ssh     │ functional-372098 ssh -n functional-372098 sudo cat /home/docker/cp-test.txt                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │                     │
	│ image   │ functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ cp      │ functional-372098 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ ssh     │ functional-372098 ssh sudo cat /usr/share/ca-certificates/3758632.pem                                                                                                 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ ssh     │ functional-372098 ssh -n functional-372098 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │                     │
	│ ssh     │ functional-372098 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                              │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ ssh     │ functional-372098 ssh echo hello                                                                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image ls                                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ ssh     │ functional-372098 ssh cat /etc/hostname                                                                                                                               │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ tunnel  │ functional-372098 tunnel --alsologtostderr                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │                     │
	│ tunnel  │ functional-372098 tunnel --alsologtostderr                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │                     │
	│ tunnel  │ functional-372098 tunnel --alsologtostderr                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │                     │
	│ image   │ functional-372098 image ls                                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image ls                                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image save kicbase/echo-server:functional-372098 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image rm kicbase/echo-server:functional-372098 --alsologtostderr                                                                                    │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image ls                                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image ls                                                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ image   │ functional-372098 image save --daemon kicbase/echo-server:functional-372098 --alsologtostderr                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:03 UTC │ 07 Dec 25 23:03 UTC │
	│ addons  │ functional-372098 addons list                                                                                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ addons  │ functional-372098 addons list -o json                                                                                                                                 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:02:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:02:50.574676  444582 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:02:50.574924  444582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:02:50.574927  444582 out.go:374] Setting ErrFile to fd 2...
	I1207 23:02:50.574930  444582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:02:50.575097  444582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:02:50.575487  444582 out.go:368] Setting JSON to false
	I1207 23:02:50.576430  444582 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6319,"bootTime":1765142251,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:02:50.576473  444582 start.go:143] virtualization: kvm guest
	I1207 23:02:50.578138  444582 out.go:179] * [functional-372098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:02:50.579262  444582 notify.go:221] Checking for updates...
	I1207 23:02:50.579271  444582 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:02:50.580408  444582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:02:50.581641  444582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:02:50.582864  444582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:02:50.586158  444582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:02:50.587252  444582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:02:50.588668  444582 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:02:50.588758  444582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:02:50.611927  444582 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:02:50.611992  444582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:02:50.665036  444582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 23:02:50.656067226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:02:50.665134  444582 docker.go:319] overlay module found
	I1207 23:02:50.666878  444582 out.go:179] * Using the docker driver based on existing profile
	I1207 23:02:50.667865  444582 start.go:309] selected driver: docker
	I1207 23:02:50.667874  444582 start.go:927] validating driver "docker" against &{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:02:50.667961  444582 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:02:50.668038  444582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:02:50.720224  444582 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 23:02:50.711159326 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:02:50.720863  444582 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:02:50.720896  444582 cni.go:84] Creating CNI manager for ""
	I1207 23:02:50.720942  444582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 23:02:50.720989  444582 start.go:353] cluster config:
	{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:02:50.722679  444582 out.go:179] * Starting "functional-372098" primary control-plane node in "functional-372098" cluster
	I1207 23:02:50.723799  444582 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1207 23:02:50.724859  444582 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:02:50.725910  444582 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1207 23:02:50.725932  444582 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I1207 23:02:50.725939  444582 cache.go:65] Caching tarball of preloaded images
	I1207 23:02:50.726004  444582 preload.go:238] Found /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1207 23:02:50.726010  444582 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1207 23:02:50.726017  444582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:02:50.726095  444582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/config.json ...
	I1207 23:02:50.745121  444582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:02:50.745131  444582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:02:50.745145  444582 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:02:50.745170  444582 start.go:360] acquireMachinesLock for functional-372098: {Name:mka5676c616fb12a30a2c0921e7e897a85a66bf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:02:50.745231  444582 start.go:364] duration metric: took 39.28µs to acquireMachinesLock for "functional-372098"
	I1207 23:02:50.745244  444582 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:02:50.745248  444582 fix.go:54] fixHost starting: 
	I1207 23:02:50.745446  444582 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
	I1207 23:02:50.761888  444582 fix.go:112] recreateIfNeeded on functional-372098: state=Running err=<nil>
	W1207 23:02:50.761903  444582 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:02:50.763413  444582 out.go:252] * Updating the running docker "functional-372098" container ...
	I1207 23:02:50.763430  444582 machine.go:94] provisionDockerMachine start ...
	I1207 23:02:50.763477  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:50.779753  444582 main.go:143] libmachine: Using SSH client type: native
	I1207 23:02:50.779982  444582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1207 23:02:50.779987  444582 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:02:50.903929  444582 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-372098
	
	I1207 23:02:50.903950  444582 ubuntu.go:182] provisioning hostname "functional-372098"
	I1207 23:02:50.904007  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:50.921176  444582 main.go:143] libmachine: Using SSH client type: native
	I1207 23:02:50.921384  444582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1207 23:02:50.921391  444582 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-372098 && echo "functional-372098" | sudo tee /etc/hostname
	I1207 23:02:51.054188  444582 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-372098
	
	I1207 23:02:51.054258  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:51.072105  444582 main.go:143] libmachine: Using SSH client type: native
	I1207 23:02:51.072315  444582 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33165 <nil> <nil>}
	I1207 23:02:51.072325  444582 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-372098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-372098/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-372098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:02:51.196661  444582 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:02:51.196677  444582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-372312/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-372312/.minikube}
	I1207 23:02:51.196711  444582 ubuntu.go:190] setting up certificates
	I1207 23:02:51.196722  444582 provision.go:84] configureAuth start
	I1207 23:02:51.196769  444582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-372098
	I1207 23:02:51.214075  444582 provision.go:143] copyHostCerts
	I1207 23:02:51.214120  444582 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem, removing ...
	I1207 23:02:51.214133  444582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem
	I1207 23:02:51.214194  444582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/ca.pem (1082 bytes)
	I1207 23:02:51.214299  444582 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem, removing ...
	I1207 23:02:51.214303  444582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem
	I1207 23:02:51.214327  444582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/cert.pem (1123 bytes)
	I1207 23:02:51.214388  444582 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem, removing ...
	I1207 23:02:51.214390  444582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem
	I1207 23:02:51.214411  444582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-372312/.minikube/key.pem (1679 bytes)
	I1207 23:02:51.214467  444582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem org=jenkins.functional-372098 san=[127.0.0.1 192.168.49.2 functional-372098 localhost minikube]
	I1207 23:02:51.573306  444582 provision.go:177] copyRemoteCerts
	I1207 23:02:51.573354  444582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:02:51.573389  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:51.590571  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:02:51.682231  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:02:51.699677  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:02:51.715856  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:02:51.732002  444582 provision.go:87] duration metric: took 535.269189ms to configureAuth
	I1207 23:02:51.732019  444582 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:02:51.732201  444582 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:02:51.732207  444582 machine.go:97] duration metric: took 968.772882ms to provisionDockerMachine
	I1207 23:02:51.732214  444582 start.go:293] postStartSetup for "functional-372098" (driver="docker")
	I1207 23:02:51.732223  444582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:02:51.732258  444582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:02:51.732294  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:51.749734  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:02:51.841489  444582 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:02:51.844790  444582 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:02:51.844806  444582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:02:51.844814  444582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/addons for local assets ...
	I1207 23:02:51.844860  444582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-372312/.minikube/files for local assets ...
	I1207 23:02:51.844945  444582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/ssl/certs/3758632.pem -> 3758632.pem in /etc/ssl/certs
	I1207 23:02:51.845011  444582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/test/nested/copy/375863/hosts -> hosts in /etc/test/nested/copy/375863
	I1207 23:02:51.845070  444582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/375863
	I1207 23:02:51.852139  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/ssl/certs/3758632.pem --> /etc/ssl/certs/3758632.pem (1708 bytes)
	I1207 23:02:51.868417  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/test/nested/copy/375863/hosts --> /etc/test/nested/copy/375863/hosts (40 bytes)
	I1207 23:02:51.884497  444582 start.go:296] duration metric: took 152.274464ms for postStartSetup
	I1207 23:02:51.884557  444582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:02:51.884639  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:51.901757  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:02:51.990319  444582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:02:51.994612  444582 fix.go:56] duration metric: took 1.249357013s for fixHost
	I1207 23:02:51.994629  444582 start.go:83] releasing machines lock for "functional-372098", held for 1.249390493s
	I1207 23:02:51.994680  444582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-372098
	I1207 23:02:52.011855  444582 ssh_runner.go:195] Run: cat /version.json
	I1207 23:02:52.011887  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:52.011931  444582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:02:52.011975  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:02:52.031110  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:02:52.031432  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:02:52.119405  444582 ssh_runner.go:195] Run: systemctl --version
	I1207 23:02:52.174920  444582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:02:52.179436  444582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:02:52.179490  444582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:02:52.187364  444582 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:02:52.187376  444582 start.go:496] detecting cgroup driver to use...
	I1207 23:02:52.187402  444582 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:02:52.187434  444582 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 23:02:52.201207  444582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 23:02:52.212714  444582 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:02:52.212746  444582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:02:52.226673  444582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:02:52.238020  444582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:02:52.356855  444582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:02:52.479355  444582 docker.go:234] disabling docker service ...
	I1207 23:02:52.479406  444582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:02:52.493314  444582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:02:52.505116  444582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:02:52.623983  444582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:02:52.745173  444582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:02:52.756877  444582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:02:52.770356  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1207 23:02:52.778676  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 23:02:52.786636  444582 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1207 23:02:52.786702  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1207 23:02:52.794591  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 23:02:52.802438  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 23:02:52.810381  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 23:02:52.818168  444582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:02:52.825520  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 23:02:52.833490  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1207 23:02:52.841631  444582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1207 23:02:52.850004  444582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:02:52.856851  444582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:02:52.863597  444582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:02:52.974891  444582 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1207 23:02:53.121917  444582 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1207 23:02:53.121979  444582 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1207 23:02:53.126016  444582 start.go:564] Will wait 60s for crictl version
	I1207 23:02:53.126061  444582 ssh_runner.go:195] Run: which crictl
	I1207 23:02:53.129587  444582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:02:53.153180  444582 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1207 23:02:53.153232  444582 ssh_runner.go:195] Run: containerd --version
	I1207 23:02:53.173524  444582 ssh_runner.go:195] Run: containerd --version
	I1207 23:02:53.196189  444582 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1207 23:02:53.197426  444582 cli_runner.go:164] Run: docker network inspect functional-372098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:02:53.214439  444582 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:02:53.219776  444582 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1207 23:02:53.220796  444582 kubeadm.go:884] updating cluster {Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:02:53.220928  444582 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1207 23:02:53.220991  444582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:02:53.245894  444582 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 23:02:53.245903  444582 containerd.go:534] Images already preloaded, skipping extraction
	I1207 23:02:53.245947  444582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:02:53.269618  444582 containerd.go:627] all images are preloaded for containerd runtime.
	I1207 23:02:53.269628  444582 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:02:53.269634  444582 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1207 23:02:53.269753  444582 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-372098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:02:53.269808  444582 ssh_runner.go:195] Run: sudo crictl info
	I1207 23:02:53.292623  444582 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1207 23:02:53.292637  444582 cni.go:84] Creating CNI manager for ""
	I1207 23:02:53.292645  444582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 23:02:53.292652  444582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:02:53.292672  444582 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-372098 NodeName:functional-372098 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:02:53.292803  444582 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-372098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:02:53.292853  444582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:02:53.300232  444582 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:02:53.300272  444582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:02:53.307735  444582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1207 23:02:53.319393  444582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:02:53.331217  444582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2086 bytes)
	I1207 23:02:53.343032  444582 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:02:53.346293  444582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:02:53.462470  444582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:02:53.474269  444582 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098 for IP: 192.168.49.2
	I1207 23:02:53.474281  444582 certs.go:195] generating shared ca certs ...
	I1207 23:02:53.474298  444582 certs.go:227] acquiring lock for ca certs: {Name:mkfbf8cc1b70595007aee06919ab1fb852308b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:02:53.474455  444582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key
	I1207 23:02:53.474507  444582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key
	I1207 23:02:53.474513  444582 certs.go:257] generating profile certs ...
	I1207 23:02:53.474633  444582 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.key
	I1207 23:02:53.474689  444582 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/apiserver.key.ed2022f9
	I1207 23:02:53.474764  444582 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/proxy-client.key
	I1207 23:02:53.474905  444582 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/375863.pem (1338 bytes)
	W1207 23:02:53.474941  444582 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-372312/.minikube/certs/375863_empty.pem, impossibly tiny 0 bytes
	I1207 23:02:53.474950  444582 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 23:02:53.474981  444582 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:02:53.475012  444582 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:02:53.475040  444582 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/certs/key.pem (1679 bytes)
	I1207 23:02:53.475090  444582 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/ssl/certs/3758632.pem (1708 bytes)
	I1207 23:02:53.475877  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:02:53.492712  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 23:02:53.509024  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:02:53.525140  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1207 23:02:53.541412  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:02:53.557291  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:02:53.573505  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:02:53.589351  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:02:53.606231  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/ssl/certs/3758632.pem --> /usr/share/ca-certificates/3758632.pem (1708 bytes)
	I1207 23:02:53.622058  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:02:53.637922  444582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-372312/.minikube/certs/375863.pem --> /usr/share/ca-certificates/375863.pem (1338 bytes)
	I1207 23:02:53.653885  444582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:02:53.665430  444582 ssh_runner.go:195] Run: openssl version
	I1207 23:02:53.671097  444582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3758632.pem
	I1207 23:02:53.677849  444582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3758632.pem /etc/ssl/certs/3758632.pem
	I1207 23:02:53.684567  444582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758632.pem
	I1207 23:02:53.687951  444582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:02 /usr/share/ca-certificates/3758632.pem
	I1207 23:02:53.687988  444582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758632.pem
	I1207 23:02:53.721708  444582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:02:53.728685  444582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:02:53.735311  444582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:02:53.741963  444582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:02:53.745164  444582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:02:53.745196  444582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:02:53.779173  444582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:02:53.785809  444582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375863.pem
	I1207 23:02:53.792303  444582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375863.pem /etc/ssl/certs/375863.pem
	I1207 23:02:53.799002  444582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375863.pem
	I1207 23:02:53.802250  444582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:02 /usr/share/ca-certificates/375863.pem
	I1207 23:02:53.802280  444582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375863.pem
	I1207 23:02:53.836245  444582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:02:53.842933  444582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:02:53.846471  444582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:02:53.879438  444582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:02:53.912676  444582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:02:53.945455  444582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:02:53.977961  444582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:02:54.011274  444582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 23:02:54.043863  444582 kubeadm.go:401] StartCluster: {Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:02:54.043943  444582 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1207 23:02:54.043977  444582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:02:54.069922  444582 cri.go:89] found id: "e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256"
	I1207 23:02:54.069934  444582 cri.go:89] found id: "77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef"
	I1207 23:02:54.069939  444582 cri.go:89] found id: "b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21"
	I1207 23:02:54.069942  444582 cri.go:89] found id: "91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a"
	I1207 23:02:54.069946  444582 cri.go:89] found id: "6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce"
	I1207 23:02:54.069949  444582 cri.go:89] found id: "27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4"
	I1207 23:02:54.069951  444582 cri.go:89] found id: "bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb"
	I1207 23:02:54.069953  444582 cri.go:89] found id: "def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd"
	I1207 23:02:54.069955  444582 cri.go:89] found id: ""
	I1207 23:02:54.069995  444582 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1207 23:02:54.093673  444582 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5","pid":1204,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5/rootfs","created":"2025-12-07T23:02:12.882314198Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-372098_16d8879b80172ef0b147ae38507cc833","io.kubernetes.cri.sandbox-memo
ry":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"16d8879b80172ef0b147ae38507cc833"},"owner":"root"},{"ociVersion":"1.2.1","id":"27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4","pid":1326,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4/rootfs","created":"2025-12-07T23:02:13.003960435Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-37
2098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2a1042ef16f3b4ac85da716dadc81299"},"owner":"root"},{"ociVersion":"1.2.1","id":"3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf","pid":1226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf/rootfs","created":"2025-12-07T23:02:12.886590624Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-
functional-372098_2dacf9c2edf1f2325e6badfe4510728a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2dacf9c2edf1f2325e6badfe4510728a"},"owner":"root"},{"ociVersion":"1.2.1","id":"4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3","pid":1216,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3/rootfs","created":"2025-12-07T23:02:12.883939878Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id
":"4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-372098_f16c9eb383622fd7c3cc4026e467886d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f16c9eb383622fd7c3cc4026e467886d"},"owner":"root"},{"ociVersion":"1.2.1","id":"6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce","pid":1345,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce/rootfs","created":"2025-12-07T23:02:13.009090355Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/ku
be-apiserver:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2dacf9c2edf1f2325e6badfe4510728a"},"owner":"root"},{"ociVersion":"1.2.1","id":"77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef","pid":2171,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef/rootfs","created":"2025-12-07T23:02:34.333797172Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f498869d5e01e30ecfd13baf20a1f09a3f
465257ae71392b559601e8794de20d","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5a996c0b-b206-47a4-9c47-8f7137de1f1a"},"owner":"root"},{"ociVersion":"1.2.1","id":"85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527","pid":1234,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527/rootfs","created":"2025-12-07T23:02:12.891390453Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527"
,"io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-372098_2a1042ef16f3b4ac85da716dadc81299","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2a1042ef16f3b4ac85da716dadc81299"},"owner":"root"},{"ociVersion":"1.2.1","id":"887f2a72603086796877cfacc8122c98f12d6ab0fc7c64c5982089d8f7b4d520","pid":2128,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/887f2a72603086796877cfacc8122c98f12d6ab0fc7c64c5982089d8f7b4d520","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/887f2a72603086796877cfacc8122c98f12d6ab0fc7c64c5982089d8f7b4d520/rootfs","created":"2025-12-07T23:02:34.244842116Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernete
s.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"887f2a72603086796877cfacc8122c98f12d6ab0fc7c64c5982089d8f7b4d520","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7d764666f9-k5tm5_804b8ebe-b61a-4547-874f-b95e8add978c","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7d764666f9-k5tm5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"804b8ebe-b61a-4547-874f-b95e8add978c"},"owner":"root"},{"ociVersion":"1.2.1","id":"91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a","pid":1810,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a/rootfs","created":"2025-12-07T23:02:23.163519357Z","annotations":{"io.kubernetes.cri.contain
er-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad","io.kubernetes.cri.sandbox-name":"kube-proxy-rsjbv","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"52f5009c-b0c2-4c05-afea-472b842f9ef4"},"owner":"root"},{"ociVersion":"1.2.1","id":"9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c","pid":1785,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c/rootfs","created":"2025-12-07T23:02:23.112620253Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbo
x-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-t6d26_7dedf5d8-b6fd-49a9-9161-47d438f3fc4e","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-t6d26","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dedf5d8-b6fd-49a9-9161-47d438f3fc4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21","pid":1857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21/rootfs","created":"2025-12-07T23:02:23.419973264Z","annotations":{"io.kuber
netes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c","io.kubernetes.cri.sandbox-name":"kindnet-t6d26","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7dedf5d8-b6fd-49a9-9161-47d438f3fc4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb","pid":1327,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb/rootfs","created":"2025-12-07T23:02:13.001206777Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"reg
istry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3","io.kubernetes.cri.sandbox-name":"etcd-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f16c9eb383622fd7c3cc4026e467886d"},"owner":"root"},{"ociVersion":"1.2.1","id":"dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad","pid":1777,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad/rootfs","created":"2025-12-07T23:02:23.069935523Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kub
ernetes.cri.sandbox-id":"dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-rsjbv_52f5009c-b0c2-4c05-afea-472b842f9ef4","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-rsjbv","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"52f5009c-b0c2-4c05-afea-472b842f9ef4"},"owner":"root"},{"ociVersion":"1.2.1","id":"def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd","pid":1311,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd/rootfs","created":"2025-12-07T23:02:12.990380564Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"
registry.k8s.io/kube-scheduler:v1.35.0-beta.0","io.kubernetes.cri.sandbox-id":"0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-372098","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"16d8879b80172ef0b147ae38507cc833"},"owner":"root"},{"ociVersion":"1.2.1","id":"e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256","pid":2179,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256/rootfs","created":"2025-12-07T23:02:34.339302043Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.13.1","io.kubernetes.cri.sandbox-id":"887f2a72603086796877cfacc8122c9
8f12d6ab0fc7c64c5982089d8f7b4d520","io.kubernetes.cri.sandbox-name":"coredns-7d764666f9-k5tm5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"804b8ebe-b61a-4547-874f-b95e8add978c"},"owner":"root"},{"ociVersion":"1.2.1","id":"f498869d5e01e30ecfd13baf20a1f09a3f465257ae71392b559601e8794de20d","pid":2120,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f498869d5e01e30ecfd13baf20a1f09a3f465257ae71392b559601e8794de20d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f498869d5e01e30ecfd13baf20a1f09a3f465257ae71392b559601e8794de20d/rootfs","created":"2025-12-07T23:02:34.243017159Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f498869d5e01e30ecfd13baf20a1f09a3f465257ae71392b559601e8794
de20d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5a996c0b-b206-47a4-9c47-8f7137de1f1a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5a996c0b-b206-47a4-9c47-8f7137de1f1a"},"owner":"root"}]
	I1207 23:02:54.093948  444582 cri.go:126] list returned 16 containers
	I1207 23:02:54.093957  444582 cri.go:129] container: {ID:0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5 Status:running}
	I1207 23:02:54.093970  444582 cri.go:131] skipping 0e45f2e1f526fc4330d5a6e32baad4f4a765eec2a59b2d2f06613b16514e34d5 - not in ps
	I1207 23:02:54.093973  444582 cri.go:129] container: {ID:27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4 Status:running}
	I1207 23:02:54.093978  444582 cri.go:135] skipping {27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4 running}: state = "running", want "paused"
	I1207 23:02:54.093984  444582 cri.go:129] container: {ID:3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf Status:running}
	I1207 23:02:54.093989  444582 cri.go:131] skipping 3bc2e449748ff3987c1f2e9b60c3d1d6d2e5bb8af52e8183c995815cf8009aaf - not in ps
	I1207 23:02:54.093991  444582 cri.go:129] container: {ID:4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3 Status:running}
	I1207 23:02:54.093995  444582 cri.go:131] skipping 4d67e174203fb4a05d1fa7cfe38c061f5fbfd3315837894bc0495c0b90e42de3 - not in ps
	I1207 23:02:54.093997  444582 cri.go:129] container: {ID:6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce Status:running}
	I1207 23:02:54.094001  444582 cri.go:135] skipping {6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce running}: state = "running", want "paused"
	I1207 23:02:54.094005  444582 cri.go:129] container: {ID:77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef Status:running}
	I1207 23:02:54.094009  444582 cri.go:135] skipping {77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef running}: state = "running", want "paused"
	I1207 23:02:54.094012  444582 cri.go:129] container: {ID:85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527 Status:running}
	I1207 23:02:54.094015  444582 cri.go:131] skipping 85198daee096f6391d2bd792d71032bf82d6ed222c13a319e974f1c71a5d6527 - not in ps
	I1207 23:02:54.094018  444582 cri.go:129] container: {ID:887f2a72603086796877cfacc8122c98f12d6ab0fc7c64c5982089d8f7b4d520 Status:running}
	I1207 23:02:54.094020  444582 cri.go:131] skipping 887f2a72603086796877cfacc8122c98f12d6ab0fc7c64c5982089d8f7b4d520 - not in ps
	I1207 23:02:54.094023  444582 cri.go:129] container: {ID:91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a Status:running}
	I1207 23:02:54.094026  444582 cri.go:135] skipping {91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a running}: state = "running", want "paused"
	I1207 23:02:54.094030  444582 cri.go:129] container: {ID:9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c Status:running}
	I1207 23:02:54.094033  444582 cri.go:131] skipping 9e6aa8abdee4b6468466863ac7e3bb067937fde4779d3e50b22ac50eda1d662c - not in ps
	I1207 23:02:54.094036  444582 cri.go:129] container: {ID:b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21 Status:running}
	I1207 23:02:54.094039  444582 cri.go:135] skipping {b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21 running}: state = "running", want "paused"
	I1207 23:02:54.094041  444582 cri.go:129] container: {ID:bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb Status:running}
	I1207 23:02:54.094044  444582 cri.go:135] skipping {bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb running}: state = "running", want "paused"
	I1207 23:02:54.094048  444582 cri.go:129] container: {ID:dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad Status:running}
	I1207 23:02:54.094051  444582 cri.go:131] skipping dea1d5f421a2ca57e38ad717c491f9798b3aa75ae01c507fbecc995f67bbd2ad - not in ps
	I1207 23:02:54.094054  444582 cri.go:129] container: {ID:def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd Status:running}
	I1207 23:02:54.094058  444582 cri.go:135] skipping {def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd running}: state = "running", want "paused"
	I1207 23:02:54.094061  444582 cri.go:129] container: {ID:e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256 Status:running}
	I1207 23:02:54.094065  444582 cri.go:135] skipping {e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256 running}: state = "running", want "paused"
	I1207 23:02:54.094067  444582 cri.go:129] container: {ID:f498869d5e01e30ecfd13baf20a1f09a3f465257ae71392b559601e8794de20d Status:running}
	I1207 23:02:54.094070  444582 cri.go:131] skipping f498869d5e01e30ecfd13baf20a1f09a3f465257ae71392b559601e8794de20d - not in ps
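	The cri.go:129/131/135 lines above show minikube walking the containerd task list and skipping entries that are either absent from the `crictl ps` output or not in the state it is looking for ("paused" here, while everything is still "running"). A minimal Go sketch of that filter follows; the container type and function names are hypothetical, not the actual cri.go code.

	package main

	import "fmt"

	// container mirrors the two fields minikube logs for each CRI entry.
	type container struct {
		ID     string
		Status string
	}

	// filterByState keeps only containers that appear in the `crictl ps` output
	// and are already in the wanted state, mirroring the "not in ps" and
	// `state = "running", want "paused"` skips logged above.
	func filterByState(all []container, inPs map[string]bool, want string) []container {
		var keep []container
		for _, c := range all {
			if !inPs[c.ID] {
				continue // skipping <id> - not in ps
			}
			if c.Status != want {
				continue // skipping {<id> running}: state = "running", want "paused"
			}
			keep = append(keep, c)
		}
		return keep
	}

	func main() {
		all := []container{{ID: "abc123", Status: "running"}}
		// With every container still running and "paused" wanted, nothing is kept.
		fmt.Println(filterByState(all, map[string]bool{"abc123": true}, "paused")) // -> []
	}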
	I1207 23:02:54.094106  444582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:02:54.101671  444582 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:02:54.101678  444582 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:02:54.101734  444582 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:02:54.108810  444582 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:02:54.109243  444582 kubeconfig.go:125] found "functional-372098" server: "https://192.168.49.2:8441"
	I1207 23:02:54.110463  444582 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:02:54.117559  444582 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-07 23:02:10.491266468 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-07 23:02:53.341609103 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
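	The drift check above is a plain `diff -u` between the kubeadm config written when the cluster was first started and the one generated for this run; any difference (here, the test replacing enable-admission-plugins with "NamespaceAutoProvision") makes minikube reconfigure the control plane. A minimal Go sketch of the same check, relying only on diff's documented exit codes (0 identical, 1 different, >1 error); the function name is illustrative, not minikube's API.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// hasDrift runs `diff -u old new` and reports whether the files differ.
	func hasDrift(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // exit 0: identical, no drift
		}
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			return true, string(out), nil // exit 1: files differ, config drift detected
		}
		return false, "", err // exit >1 or exec failure: missing file, etc.
	}

	func main() {
		drift, diff, err := hasDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drift, err)
		if drift {
			fmt.Println(diff)
		}
	}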
	I1207 23:02:54.117568  444582 kubeadm.go:1161] stopping kube-system containers ...
	I1207 23:02:54.117582  444582 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1207 23:02:54.117639  444582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:02:54.143513  444582 cri.go:89] found id: "e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256"
	I1207 23:02:54.143521  444582 cri.go:89] found id: "77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef"
	I1207 23:02:54.143523  444582 cri.go:89] found id: "b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21"
	I1207 23:02:54.143559  444582 cri.go:89] found id: "91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a"
	I1207 23:02:54.143562  444582 cri.go:89] found id: "6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce"
	I1207 23:02:54.143564  444582 cri.go:89] found id: "27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4"
	I1207 23:02:54.143566  444582 cri.go:89] found id: "bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb"
	I1207 23:02:54.143567  444582 cri.go:89] found id: "def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd"
	I1207 23:02:54.143569  444582 cri.go:89] found id: ""
	I1207 23:02:54.143573  444582 cri.go:252] Stopping containers: [e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256 77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21 91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a 6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce 27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4 bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd]
	I1207 23:02:54.143621  444582 ssh_runner.go:195] Run: which crictl
	I1207 23:02:54.146956  444582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256 77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21 91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a 6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce 27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4 bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd
	I1207 23:03:09.652070  444582 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256 77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21 91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a 6926e6c223eb3fc773f1e27c851b360300fdb9df3cea5c69f432abdb539590ce 27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4 bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd: (15.505076234s)
	I1207 23:03:09.652129  444582 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 23:03:09.694798  444582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:03:09.703156  444582 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  7 23:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  7 23:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Dec  7 23:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  7 23:02 /etc/kubernetes/scheduler.conf
	
	I1207 23:03:09.703213  444582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1207 23:03:09.710969  444582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1207 23:03:09.718331  444582 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:03:09.718371  444582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:03:09.725315  444582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1207 23:03:09.732522  444582 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:03:09.732569  444582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:03:09.739420  444582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1207 23:03:09.746273  444582 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:03:09.746304  444582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:03:09.753162  444582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:03:09.760307  444582 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 23:03:09.799559  444582 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 23:03:10.366510  444582 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 23:03:10.551290  444582 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 23:03:10.595069  444582 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 23:03:10.652189  444582 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:03:10.652271  444582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:03:11.152340  444582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:03:11.652390  444582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:03:11.666801  444582 api_server.go:72] duration metric: took 1.014623209s to wait for apiserver process to appear ...
	I1207 23:03:11.666818  444582 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:03:11.666855  444582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 23:03:12.813376  444582 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 23:03:12.813396  444582 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 23:03:12.813411  444582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 23:03:12.820097  444582 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 23:03:12.820116  444582 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 23:03:13.167613  444582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 23:03:13.171592  444582 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:03:13.171612  444582 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:03:13.667220  444582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 23:03:13.671545  444582 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:03:13.671564  444582 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:03:14.167832  444582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 23:03:14.171886  444582 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1207 23:03:14.178392  444582 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:03:14.178407  444582 api_server.go:131] duration metric: took 2.511584288s to wait for apiserver health ...
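	The 403/500/200 sequence above is what the restarted apiserver returns while it finishes starting: the anonymous probe is first rejected with 403, then /healthz reports 500 while post-start hooks (rbac/bootstrap-roles, scheduling bootstrap priority classes) are still failing, and finally it returns 200 "ok". A minimal Go sketch of the same ~500ms polling loop; the URL and timeout are taken from the log but the code itself is illustrative, and TLS verification is skipped only because this is an anonymous probe against the cluster's self-signed certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute))
	}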
	I1207 23:03:14.178416  444582 cni.go:84] Creating CNI manager for ""
	I1207 23:03:14.178421  444582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1207 23:03:14.180256  444582 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 23:03:14.181609  444582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 23:03:14.185851  444582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1207 23:03:14.185859  444582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 23:03:14.198258  444582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:03:14.484474  444582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:03:14.487793  444582 system_pods.go:59] 8 kube-system pods found
	I1207 23:03:14.487825  444582 system_pods.go:61] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:14.487835  444582 system_pods.go:61] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:14.487840  444582 system_pods.go:61] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:14.487845  444582 system_pods.go:61] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:14.487854  444582 system_pods.go:61] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:14.487857  444582 system_pods.go:61] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:14.487861  444582 system_pods.go:61] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:14.487871  444582 system_pods.go:61] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:14.487876  444582 system_pods.go:74] duration metric: took 3.390083ms to wait for pod list to return data ...
	I1207 23:03:14.487882  444582 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:03:14.490265  444582 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:03:14.490283  444582 node_conditions.go:123] node cpu capacity is 8
	I1207 23:03:14.490299  444582 node_conditions.go:105] duration metric: took 2.413015ms to run NodePressure ...
	I1207 23:03:14.490352  444582 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 23:03:14.734052  444582 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1207 23:03:14.737012  444582 kubeadm.go:744] kubelet initialised
	I1207 23:03:14.737026  444582 kubeadm.go:745] duration metric: took 2.957296ms waiting for restarted kubelet to initialise ...
	I1207 23:03:14.737043  444582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:03:14.746572  444582 ops.go:34] apiserver oom_adj: -16
	I1207 23:03:14.746584  444582 kubeadm.go:602] duration metric: took 20.644899858s to restartPrimaryControlPlane
	I1207 23:03:14.746593  444582 kubeadm.go:403] duration metric: took 20.702739156s to StartCluster
	I1207 23:03:14.746611  444582 settings.go:142] acquiring lock: {Name:mk7216554e80629b12e2915c4ff3500dbf3bd589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:03:14.746671  444582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:03:14.747243  444582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-372312/kubeconfig: {Name:mkfa57fc45c954633f03c454814d9663bdf56b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:03:14.747450  444582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1207 23:03:14.747522  444582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:03:14.747600  444582 addons.go:70] Setting storage-provisioner=true in profile "functional-372098"
	I1207 23:03:14.747614  444582 addons.go:239] Setting addon storage-provisioner=true in "functional-372098"
	W1207 23:03:14.747619  444582 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:03:14.747642  444582 host.go:66] Checking if "functional-372098" exists ...
	I1207 23:03:14.747635  444582 addons.go:70] Setting default-storageclass=true in profile "functional-372098"
	I1207 23:03:14.747656  444582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-372098"
	I1207 23:03:14.747718  444582 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:03:14.747910  444582 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
	I1207 23:03:14.748067  444582 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
	I1207 23:03:14.752595  444582 out.go:179] * Verifying Kubernetes components...
	I1207 23:03:14.753971  444582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:03:14.768323  444582 addons.go:239] Setting addon default-storageclass=true in "functional-372098"
	W1207 23:03:14.768332  444582 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:03:14.768351  444582 host.go:66] Checking if "functional-372098" exists ...
	I1207 23:03:14.768526  444582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:03:14.768720  444582 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
	I1207 23:03:14.773180  444582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:03:14.773191  444582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:03:14.773243  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:03:14.786493  444582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:03:14.786505  444582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:03:14.786590  444582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
	I1207 23:03:14.798018  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:03:14.807488  444582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
	I1207 23:03:14.903638  444582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:03:14.904014  444582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:03:14.908838  444582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:03:14.916462  444582 node_ready.go:35] waiting up to 6m0s for node "functional-372098" to be "Ready" ...
	I1207 23:03:14.918923  444582 node_ready.go:49] node "functional-372098" is "Ready"
	I1207 23:03:14.918938  444582 node_ready.go:38] duration metric: took 2.456204ms for node "functional-372098" to be "Ready" ...
	I1207 23:03:14.918951  444582 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:03:14.918988  444582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:03:15.364169  444582 api_server.go:72] duration metric: took 616.695318ms to wait for apiserver process to appear ...
	I1207 23:03:15.364185  444582 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:03:15.364204  444582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 23:03:15.369072  444582 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1207 23:03:15.369906  444582 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:03:15.369921  444582 api_server.go:131] duration metric: took 5.730791ms to wait for apiserver health ...
	I1207 23:03:15.369927  444582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:03:15.371712  444582 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:03:15.372197  444582 system_pods.go:59] 8 kube-system pods found
	I1207 23:03:15.372217  444582 system_pods.go:61] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:15.372227  444582 system_pods.go:61] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:15.372231  444582 system_pods.go:61] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:15.372234  444582 system_pods.go:61] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:15.372239  444582 system_pods.go:61] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:15.372244  444582 system_pods.go:61] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:15.372248  444582 system_pods.go:61] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:15.372251  444582 system_pods.go:61] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:15.372255  444582 system_pods.go:74] duration metric: took 2.323923ms to wait for pod list to return data ...
	I1207 23:03:15.372261  444582 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:03:15.372871  444582 addons.go:530] duration metric: took 625.358993ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 23:03:15.374169  444582 default_sa.go:45] found service account: "default"
	I1207 23:03:15.374179  444582 default_sa.go:55] duration metric: took 1.914051ms for default service account to be created ...
	I1207 23:03:15.374184  444582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:03:15.376288  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:15.376303  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:15.376309  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:15.376313  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:15.376316  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:15.376321  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:15.376326  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:15.376330  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:15.376332  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:15.376360  444582 retry.go:31] will retry after 276.720773ms: missing components: kube-apiserver
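	The "will retry after ..." intervals above come from minikube's retry helper while it waits for the restarted kube-apiserver pod to leave Pending. The exact backoff schedule is not shown in the log; the sketch below only reproduces a similar jittered, growing wait, and the 1.3 growth factor is purely an assumption for illustration.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// nextBackoff grows the previous wait by roughly 30% and adds a little
	// jitter, which gives intervals shaped like the ones logged above.
	func nextBackoff(d time.Duration) time.Duration {
		grown := time.Duration(float64(d) * 1.3)
		jitter := time.Duration(rand.Int63n(int64(grown) / 10))
		return grown + jitter
	}

	func main() {
		d := 250 * time.Millisecond
		for i := 0; i < 8; i++ {
			fmt.Printf("will retry after %s: missing components: kube-apiserver\n", d)
			time.Sleep(0) // a real loop would sleep for d before re-checking the pods
			d = nextBackoff(d)
		}
	}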
	I1207 23:03:15.656147  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:15.656171  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:15.656181  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:15.656187  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:15.656192  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:15.656198  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:15.656200  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:15.656206  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:15.656210  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:15.656228  444582 retry.go:31] will retry after 360.950386ms: missing components: kube-apiserver
	I1207 23:03:16.020363  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:16.020381  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:16.020387  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:16.020393  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:16.020396  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:16.020404  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:16.020406  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:16.020411  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:16.020413  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:16.020428  444582 retry.go:31] will retry after 455.059527ms: missing components: kube-apiserver
	I1207 23:03:16.479026  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:16.479045  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:16.479052  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:16.479058  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:16.479061  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:16.479069  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:16.479072  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:16.479076  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:16.479079  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:16.479093  444582 retry.go:31] will retry after 417.266979ms: missing components: kube-apiserver
	I1207 23:03:16.899968  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:16.899994  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:16.900004  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:16.900010  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:16.900015  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:16.900022  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:16.900026  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:16.900033  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:16.900038  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:16.900055  444582 retry.go:31] will retry after 559.823206ms: missing components: kube-apiserver
	I1207 23:03:17.463621  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:17.463641  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:17.463651  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:17.463656  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:17.463665  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:17.463669  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:17.463672  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:17.463678  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:17.463680  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:17.463720  444582 retry.go:31] will retry after 908.504454ms: missing components: kube-apiserver
	I1207 23:03:18.375906  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:18.375925  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:18.375931  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:18.375935  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:18.375939  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:18.375943  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:18.375946  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:18.375950  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:18.375966  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:18.375980  444582 retry.go:31] will retry after 1.106032876s: missing components: kube-apiserver
	I1207 23:03:19.485412  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:19.485431  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:19.485436  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:19.485440  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:19.485443  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Pending
	I1207 23:03:19.485448  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:19.485450  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:19.485459  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:19.485462  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:19.485476  444582 retry.go:31] will retry after 1.223643802s: missing components: kube-apiserver
	I1207 23:03:20.713306  444582 system_pods.go:86] 8 kube-system pods found
	I1207 23:03:20.713325  444582 system_pods.go:89] "coredns-7d764666f9-k5tm5" [804b8ebe-b61a-4547-874f-b95e8add978c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:03:20.713331  444582 system_pods.go:89] "etcd-functional-372098" [c22d8334-1c36-46bd-a1cf-edc3e9723cf9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:03:20.713341  444582 system_pods.go:89] "kindnet-t6d26" [7dedf5d8-b6fd-49a9-9161-47d438f3fc4e] Running
	I1207 23:03:20.713345  444582 system_pods.go:89] "kube-apiserver-functional-372098" [9c21d2b5-457d-4e18-950c-20b510a9f1ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:03:20.713350  444582 system_pods.go:89] "kube-controller-manager-functional-372098" [4b163908-513d-4780-823b-566101d118c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:03:20.713372  444582 system_pods.go:89] "kube-proxy-rsjbv" [52f5009c-b0c2-4c05-afea-472b842f9ef4] Running
	I1207 23:03:20.713382  444582 system_pods.go:89] "kube-scheduler-functional-372098" [f0a35123-c6f2-44d2-8572-5acbba3e22cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:03:20.713384  444582 system_pods.go:89] "storage-provisioner" [5a996c0b-b206-47a4-9c47-8f7137de1f1a] Running
	I1207 23:03:20.713391  444582 system_pods.go:126] duration metric: took 5.339202676s to wait for k8s-apps to be running ...
	I1207 23:03:20.713396  444582 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:03:20.713438  444582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:03:20.725814  444582 system_svc.go:56] duration metric: took 12.405449ms WaitForService to wait for kubelet
	I1207 23:03:20.725833  444582 kubeadm.go:587] duration metric: took 5.978362511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:03:20.725855  444582 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:03:20.728250  444582 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:03:20.728261  444582 node_conditions.go:123] node cpu capacity is 8
	I1207 23:03:20.728273  444582 node_conditions.go:105] duration metric: took 2.414667ms to run NodePressure ...
	I1207 23:03:20.728283  444582 start.go:242] waiting for startup goroutines ...
	I1207 23:03:20.728289  444582 start.go:247] waiting for cluster config update ...
	I1207 23:03:20.728298  444582 start.go:256] writing updated cluster config ...
	I1207 23:03:20.728531  444582 ssh_runner.go:195] Run: rm -f paused
	I1207 23:03:20.731981  444582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:03:20.734494  444582 pod_ready.go:83] waiting for pod "coredns-7d764666f9-k5tm5" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:03:22.739657  444582 pod_ready.go:104] pod "coredns-7d764666f9-k5tm5" is not "Ready", error: <nil>
	W1207 23:03:25.239744  444582 pod_ready.go:104] pod "coredns-7d764666f9-k5tm5" is not "Ready", error: <nil>
	I1207 23:03:26.738949  444582 pod_ready.go:94] pod "coredns-7d764666f9-k5tm5" is "Ready"
	I1207 23:03:26.738967  444582 pod_ready.go:86] duration metric: took 6.004463208s for pod "coredns-7d764666f9-k5tm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:26.741216  444582 pod_ready.go:83] waiting for pod "etcd-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:27.745757  444582 pod_ready.go:94] pod "etcd-functional-372098" is "Ready"
	I1207 23:03:27.745776  444582 pod_ready.go:86] duration metric: took 1.00454756s for pod "etcd-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:27.747598  444582 pod_ready.go:83] waiting for pod "kube-apiserver-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:29.253424  444582 pod_ready.go:94] pod "kube-apiserver-functional-372098" is "Ready"
	I1207 23:03:29.253447  444582 pod_ready.go:86] duration metric: took 1.505836629s for pod "kube-apiserver-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:29.255724  444582 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:29.259361  444582 pod_ready.go:94] pod "kube-controller-manager-functional-372098" is "Ready"
	I1207 23:03:29.259374  444582 pod_ready.go:86] duration metric: took 3.63851ms for pod "kube-controller-manager-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:29.261297  444582 pod_ready.go:83] waiting for pod "kube-proxy-rsjbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:29.538325  444582 pod_ready.go:94] pod "kube-proxy-rsjbv" is "Ready"
	I1207 23:03:29.538341  444582 pod_ready.go:86] duration metric: took 277.033119ms for pod "kube-proxy-rsjbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:29.738383  444582 pod_ready.go:83] waiting for pod "kube-scheduler-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:30.138008  444582 pod_ready.go:94] pod "kube-scheduler-functional-372098" is "Ready"
	I1207 23:03:30.138026  444582 pod_ready.go:86] duration metric: took 399.62582ms for pod "kube-scheduler-functional-372098" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:03:30.138035  444582 pod_ready.go:40] duration metric: took 9.406025382s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:03:30.180877  444582 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:03:30.182614  444582 out.go:179] * Done! kubectl is now configured to use "functional-372098" cluster and "default" namespace by default
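	
	The "retry.go: will retry after …" lines above show the start-up wait loop polling the kube-system pods with a growing, jittered delay until kube-apiserver reports Running, followed by the per-pod "Ready" waits. The following is a minimal Go sketch of that poll-with-backoff pattern; it is illustrative only, not minikube's actual implementation, and the function names and backoff constants are assumptions chosen to roughly match the 0.5s–1.2s spacing seen in the log.
	
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForComponents polls check() until it reports no missing components or the
	// overall deadline expires, sleeping a growing, jittered interval between attempts.
	// Hypothetical helper, not minikube code.
	func waitForComponents(timeout time.Duration, check func() []string) error {
		deadline := time.Now().Add(timeout)
		backoff := 500 * time.Millisecond
		for {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for: " + fmt.Sprint(missing))
			}
			// Jitter the delay and grow it for the next attempt, as in the log above.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2
		}
	}
	
	func main() {
		attempts := 0
		err := waitForComponents(10*time.Second, func() []string {
			attempts++
			if attempts < 4 {
				return []string{"kube-apiserver"} // simulate the Pending apiserver seen above
			}
			return nil
		})
		fmt.Println("done, err =", err)
	}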
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2ccf118549ae3       aa9d02839d8de       6 minutes ago       Running             kube-apiserver            0                   72a2bc1d4f365       kube-apiserver-functional-372098            kube-system
	ef3b33e5c5d94       45f3cc72d235f       6 minutes ago       Running             kube-controller-manager   1                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	0c06ca4e4069e       a3e246e9556e9       6 minutes ago       Running             etcd                      1                   4d67e174203fb       etcd-functional-372098                      kube-system
	67503af5d77f5       7bb6219ddab95       6 minutes ago       Running             kube-scheduler            1                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	10581c840670f       8a4ded35a3eb1       6 minutes ago       Running             kube-proxy                1                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	5cf5db2404f73       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	bf437fd6022b3       6e38f40d628db       6 minutes ago       Running             storage-provisioner       1                   f498869d5e01e       storage-provisioner                         kube-system
	d8af143545632       aa5e3ebc0dfed       6 minutes ago       Running             coredns                   1                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	e4204dc06ed5f       aa5e3ebc0dfed       7 minutes ago       Exited              coredns                   0                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	77c5419a5af32       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       0                   f498869d5e01e       storage-provisioner                         kube-system
	b4896657616d8       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	91c336f27dcdb       8a4ded35a3eb1       7 minutes ago       Exited              kube-proxy                0                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	27c569bb3458a       45f3cc72d235f       7 minutes ago       Exited              kube-controller-manager   0                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	bb6c22ba4f99b       a3e246e9556e9       7 minutes ago       Exited              etcd                      0                   4d67e174203fb       etcd-functional-372098                      kube-system
	def0377c6f0d1       7bb6219ddab95       7 minutes ago       Exited              kube-scheduler            0                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	
	
	==> containerd <==
	Dec 07 23:09:31 functional-372098 containerd[3768]: time="2025-12-07T23:09:31.391168907Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:09:31 functional-372098 containerd[3768]: time="2025-12-07T23:09:31.391949068Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:09:31 functional-372098 containerd[3768]: time="2025-12-07T23:09:31.392052544Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
	Dec 07 23:09:32 functional-372098 containerd[3768]: time="2025-12-07T23:09:32.635541327Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 07 23:09:33 functional-372098 containerd[3768]: time="2025-12-07T23:09:33.557037570Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 23:09:33 functional-372098 containerd[3768]: time="2025-12-07T23:09:33.557089877Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 07 23:09:34 functional-372098 containerd[3768]: time="2025-12-07T23:09:34.635303727Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 07 23:09:35 functional-372098 containerd[3768]: time="2025-12-07T23:09:35.531970757Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 23:09:35 functional-372098 containerd[3768]: time="2025-12-07T23:09:35.532007308Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10998"
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.404215990Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.404338519Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.405062822Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.405172759Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.405818792Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.405893849Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.406526830Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.406622986Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.407268614Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.407361605Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.408036886Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.408109192Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.408732761Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.408812835Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.1GB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.409614461Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.2MB.events\""
	Dec 07 23:09:41 functional-372098 containerd[3768]: time="2025-12-07T23:09:41.409758164Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.1GB.events\""
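	
	The PullImage failures above end in HTTP 429 from registry-1.docker.io, i.e. the unauthenticated Docker Hub pull rate limit, which plausibly explains why the echo-server image never became available to the failing service tests. Below is a small, illustrative Go helper for classifying such errors; it is not part of minikube or containerd, and the substring checks simply mirror the error text captured in the log.
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// isDockerHubRateLimited reports whether a pull error looks like the Docker Hub
	// unauthenticated rate limit. Hypothetical helper; substrings taken from the log above.
	func isDockerHubRateLimited(err error) bool {
		if err == nil {
			return false
		}
		msg := err.Error()
		return strings.Contains(msg, "429 Too Many Requests") ||
			strings.Contains(msg, "toomanyrequests")
	}
	
	func main() {
		pullErr := fmt.Errorf("failed to pull and unpack image %q: unexpected status from GET request: 429 Too Many Requests",
			"docker.io/kicbase/echo-server:latest")
		if isDockerHubRateLimited(pullErr) {
			// A harness could retry with backoff, pre-load the image into the cluster,
			// or authenticate to the registry instead of failing the test outright.
			fmt.Println("pull hit the unauthenticated Docker Hub rate limit")
		}
	}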
	
	
	==> coredns [d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42358 - 39813 "HINFO IN 2893170015885620547.3879998501473278557. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.879961652s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44361 - 37708 "HINFO IN 8252574301961499388.8195395475701870576. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085806688s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-372098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-372098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-372098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_02_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:02:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-372098
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:09:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:08:38 +0000   Sun, 07 Dec 2025 23:02:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-372098
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2f15a4e4-5557-42d7-a12d-8f37ddc06bdc
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-h9ctn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  default                     hello-node-connect-9f67c86d4-rt6h6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     mysql-844cf969f6-swxtx                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m12s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-7d764666f9-k5tm5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m28s
	  kube-system                 etcd-functional-372098                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m33s
	  kube-system                 kindnet-t6d26                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m28s
	  kube-system                 kube-apiserver-functional-372098             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-functional-372098    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-rsjbv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-scheduler-functional-372098             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m29s  node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
	  Normal  RegisteredNode  6m34s  node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9] <==
	{"level":"warn","ts":"2025-12-07T23:03:12.237829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.243608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.249882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.258230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.264336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.270498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.277120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.291838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.298227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.304563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.312856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.318729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.325848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.331520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.344456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.351231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.357336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.363314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.369310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.375148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.388450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.395023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.401166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.407249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.457756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	
	
	==> etcd [bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb] <==
	{"level":"warn","ts":"2025-12-07T23:02:14.265025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.271024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.288678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.294504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.300407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.306862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.353907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:03:09.553511Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T23:03:09.553622Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T23:03:09.553773Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555329Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555392Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.555422Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T23:03:09.555453Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T23:03:09.555451Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555458Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555508Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555518Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555542Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557307Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T23:03:09.557363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557392Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T23:03:09.557400Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:09:50 up  1:52,  0 user,  load average: 0.01, 0.18, 0.41
	Linux functional-372098 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f] <==
	I1207 23:07:40.806925       1 main.go:301] handling current node
	I1207 23:07:50.807910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:07:50.807941       1 main.go:301] handling current node
	I1207 23:08:00.813287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:00.813327       1 main.go:301] handling current node
	I1207 23:08:10.811821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:10.811872       1 main.go:301] handling current node
	I1207 23:08:20.808359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:20.808417       1 main.go:301] handling current node
	I1207 23:08:30.812278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:30.812320       1 main.go:301] handling current node
	I1207 23:08:40.807748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:40.807784       1 main.go:301] handling current node
	I1207 23:08:50.808067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:08:50.808098       1 main.go:301] handling current node
	I1207 23:09:00.807310       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:00.807342       1 main.go:301] handling current node
	I1207 23:09:10.815087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:10.815117       1 main.go:301] handling current node
	I1207 23:09:20.807329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:20.807360       1 main.go:301] handling current node
	I1207 23:09:30.807557       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:30.807588       1 main.go:301] handling current node
	I1207 23:09:40.806779       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:09:40.806834       1 main.go:301] handling current node
	
	
	==> kindnet [b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21] <==
	I1207 23:02:23.611610       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:02:23.611876       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 23:02:23.612007       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:02:23.612023       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:02:23.612042       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:02:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:02:23.812318       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:02:23.812611       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:02:23.812779       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:02:23.813113       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:02:24.205586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:02:24.205618       1 metrics.go:72] Registering metrics
	I1207 23:02:24.205710       1 controller.go:711] "Syncing nftables rules"
	I1207 23:02:33.816182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:33.816279       1 main.go:301] handling current node
	I1207 23:02:43.816778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:43.816822       1 main.go:301] handling current node
	I1207 23:02:53.814190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:53.814234       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889] <==
	I1207 23:03:12.895252       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:12.895274       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:03:12.895098       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:03:12.895419       1 aggregator.go:187] initial CRD sync complete...
	I1207 23:03:12.895428       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:03:12.895433       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:03:12.895439       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:03:12.900034       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:03:12.916725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:03:12.917953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:03:13.621946       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:03:13.798835       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1207 23:03:14.002955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:03:14.004054       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:03:14.008020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:03:14.479237       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:03:14.565406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:03:14.609604       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:03:14.614746       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:03:26.657356       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:03:32.687174       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.66.15"}
	I1207 23:03:38.179099       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.192.100"}
	I1207 23:03:38.781988       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.9.0"}
	I1207 23:03:39.826372       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.28.61"}
	I1207 23:09:18.560342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.155.3"}
	
	
	==> kube-controller-manager [27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4] <==
	I1207 23:02:21.521818       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522069       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522103       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522229       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522340       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522614       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522682       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522732       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522819       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523110       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523382       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523713       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524812       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524910       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.525007       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.526069       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:21.528785       1 range_allocator.go:433] "Set node PodCIDR" node="functional-372098" podCIDRs=["10.244.0.0/24"]
	I1207 23:02:21.535609       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622638       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622653       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:02:21.622657       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:02:21.627127       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:36.682963       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640] <==
	I1207 23:03:16.029037       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.029249       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.029281       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.029872       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.030043       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.030401       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.031183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:16.031625       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.031943       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.032375       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033656       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033712       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033727       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033905       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033735       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034026       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034591       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034672       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034741       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.036367       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.038000       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128372       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128388       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:03:16.128393       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:03:16.132205       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72] <==
	I1207 23:03:00.432540       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:03:00.497444       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:20.097782       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:20.097817       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:03:20.097903       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:03:20.118707       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:03:20.118766       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:03:20.124259       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:03:20.124567       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:03:20.124584       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:20.126112       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:03:20.126122       1 config.go:200] "Starting service config controller"
	I1207 23:03:20.126148       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:03:20.126133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:03:20.126225       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:03:20.126240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:03:20.126247       1 config.go:309] "Starting node config controller"
	I1207 23:03:20.126584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:03:20.126592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:03:20.226321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:03:20.226350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:03:20.226821       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a] <==
	I1207 23:02:23.207160       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:02:23.278330       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:23.378882       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:23.378921       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:02:23.379062       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:02:23.398024       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:02:23.398074       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:02:23.403163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:02:23.403501       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:02:23.403527       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:02:23.405445       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:02:23.405465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:02:23.405483       1 config.go:200] "Starting service config controller"
	I1207 23:02:23.405488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:02:23.405499       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:02:23.405503       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:02:23.405547       1 config.go:309] "Starting node config controller"
	I1207 23:02:23.405552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:02:23.505835       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:02:23.505858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:02:23.505870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:02:23.505883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6] <==
	I1207 23:03:11.407436       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:03:12.809581       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:03:12.809621       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:03:12.809633       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:03:12.809642       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:03:12.836375       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:03:12.836486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:12.840380       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:03:12.840479       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:12.840534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:03:12.840628       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:03:12.941464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd] <==
	E1207 23:02:15.676375       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.677255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1207 23:02:15.702232       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1207 23:02:15.703102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1207 23:02:15.772992       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.773938       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1207 23:02:15.776967       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1207 23:02:15.777880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1207 23:02:15.808060       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1207 23:02:15.808975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1207 23:02:15.828950       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1207 23:02:15.829754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1207 23:02:15.847911       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:02:15.850936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1207 23:02:15.898865       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.899689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1207 23:02:15.951270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1207 23:02:15.952209       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1207 23:02:17.937339       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:09.602953       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 23:03:09.603068       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 23:03:09.603098       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 23:03:09.603105       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1207 23:03:09.603097       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 23:03:09.603128       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 23:09:33 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:09:33 functional-372098 kubelet[4673]:  > image="kicbase/echo-server:latest"
	Dec 07 23:09:33 functional-372098 kubelet[4673]: E1207 23:09:33.557536    4673 kuberuntime_manager.go:1664] "Unhandled Error" err=<
	Dec 07 23:09:33 functional-372098 kubelet[4673]:         container echo-server start failed in pod hello-node-connect-9f67c86d4-rt6h6_default(4120483f-8517-4e8a-87e9-d3efcbee6356): ErrImagePull: failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 23:09:33 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:09:33 functional-372098 kubelet[4673]:  > logger="UnhandledError"
	Dec 07 23:09:33 functional-372098 kubelet[4673]: E1207 23:09:33.557571    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:09:35 functional-372098 kubelet[4673]: E1207 23:09:35.532258    4673 log.go:32] "PullImage from image service failed" err=<
	Dec 07 23:09:35 functional-372098 kubelet[4673]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 23:09:35 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:09:35 functional-372098 kubelet[4673]:  > image="kicbase/echo-server:latest"
	Dec 07 23:09:35 functional-372098 kubelet[4673]: E1207 23:09:35.532312    4673 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 07 23:09:35 functional-372098 kubelet[4673]:         failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 23:09:35 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:09:35 functional-372098 kubelet[4673]:  > image="kicbase/echo-server:latest"
	Dec 07 23:09:35 functional-372098 kubelet[4673]: E1207 23:09:35.532538    4673 kuberuntime_manager.go:1664] "Unhandled Error" err=<
	Dec 07 23:09:35 functional-372098 kubelet[4673]:         container echo-server start failed in pod hello-node-5758569b79-h9ctn_default(407fccce-e612-4fd6-9dea-5557b2a23bcb): ErrImagePull: failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 07 23:09:35 functional-372098 kubelet[4673]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 07 23:09:35 functional-372098 kubelet[4673]:  > logger="UnhandledError"
	Dec 07 23:09:35 functional-372098 kubelet[4673]: E1207 23:09:35.532589    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:09:36 functional-372098 kubelet[4673]: E1207 23:09:36.636358    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:09:40 functional-372098 kubelet[4673]: E1207 23:09:40.635980    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	Dec 07 23:09:41 functional-372098 kubelet[4673]: E1207 23:09:41.634658    4673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-372098" containerName="kube-controller-manager"
	Dec 07 23:09:41 functional-372098 kubelet[4673]: E1207 23:09:41.635225    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="95c0efdc-87b6-4bd8-aef3-566618b1523e"
	Dec 07 23:09:45 functional-372098 kubelet[4673]: E1207 23:09:45.635403    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	
	
	==> storage-provisioner [77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef] <==
	I1207 23:02:34.473761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-372098_2723da14-e3ab-463a-b591-bd211f668a0a!
	W1207 23:02:36.381294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:36.384835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.387295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.390656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.394077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.397975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.400909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.405558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.409032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.412633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.416254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.420950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.424007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.428978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.431882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.436004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.438706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.442377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.445344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.450611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.453177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.457032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.459733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.463288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f] <==
	W1207 23:09:26.044536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:28.046999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:28.050539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:30.052686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:30.056354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:32.059345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:32.063120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:34.065565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:34.068959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:36.072080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:36.077127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:38.079762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:38.083415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:40.086109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:40.089997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:42.093542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:42.097152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:44.099716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:44.103270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:46.106290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:46.109845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:48.112427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:48.116996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:50.120425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:09:50.124789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
helpers_test.go:269: (dbg) Run:  kubectl --context functional-372098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-372098 describe pod hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-372098 describe pod hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-5758569b79-h9ctn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bq8h8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bq8h8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m13s                 default-scheduler  Successfully assigned default/hello-node-5758569b79-h9ctn to functional-372098
	  Normal   Pulling    3m6s (x5 over 6m12s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m4s (x5 over 6m11s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m4s (x5 over 6m11s)  kubelet  Error: ErrImagePull
	  Warning  Failed   65s (x20 over 6m11s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  52s (x21 over 6m11s)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-rt6h6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsmdx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tsmdx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-rt6h6 to functional-372098
	  Normal   Pulling    19s (x2 over 33s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     18s (x2 over 32s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   18s (x2 over 32s)  kubelet  Error: ErrImagePull
	  Normal   BackOff  6s (x2 over 31s)   kubelet  Back-off pulling image "kicbase/echo-server"
	  Warning  Failed   6s (x2 over 31s)   kubelet  Error: ImagePullBackOff
	
	
	Name:             mysql-844cf969f6-swxtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n2f6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2n2f6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m13s                  default-scheduler  Successfully assigned default/mysql-844cf969f6-swxtx to functional-372098
	  Warning  Failed     4m48s (x3 over 5m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  3m26s (x5 over 6m13s)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   3m25s (x2 over 6m12s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m25s (x5 over 6m12s)  kubelet  Error: ErrImagePull
	  Warning  Failed   60s (x20 over 6m11s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  49s (x21 over 6m11s)   kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:39 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cjdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cjdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m12s                  default-scheduler  Successfully assigned default/nginx-svc to functional-372098
	  Normal   Pulling    3m24s (x5 over 6m11s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m23s (x5 over 6m10s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m23s (x5 over 6m10s)  kubelet  Error: ErrImagePull
	  Warning  Failed   65s (x20 over 6m10s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  53s (x21 over 6m10s)   kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:48 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b56ct (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-b56ct:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-372098
	  Normal   Pulling    3m6s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m5s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   3m5s (x5 over 6m1s)  kubelet  Error: ErrImagePull
	  Normal   BackOff  57s (x21 over 6m1s)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   57s (x21 over 6m1s)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (367.74s)
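All five non-running pods in the post-mortem above fail the same way: unauthenticated pulls from registry-1.docker.io return 429 Too Many Requests, i.e. the Docker Hub anonymous pull rate limit. A minimal sketch of one common mitigation, authenticating the pulls, is shown below; the secret name and the DOCKERHUB_USER / DOCKERHUB_TOKEN variables are illustrative and not part of this run:

  # Store Docker Hub credentials in the cluster used by this test profile.
  kubectl --context functional-372098 create secret docker-registry dockerhub-creds \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username="$DOCKERHUB_USER" \
    --docker-password="$DOCKERHUB_TOKEN"

  # Attach the secret to the default service account so pods in the default
  # namespace (hello-node, mysql, nginx-svc, sp-pod) pull with the
  # authenticated, higher rate limit.
  kubectl --context functional-372098 patch serviceaccount default \
    -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'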

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-372098 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-swxtx" [40f4d455-e467-4886-9c6d-8ee0145d3d14] Pending
helpers_test.go:352: "mysql-844cf969f6-swxtx" [40f4d455-e467-4886-9c6d-8ee0145d3d14] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-07 23:13:38.52697794 +0000 UTC m=+2558.679644015
functional_test.go:1804: (dbg) Run:  kubectl --context functional-372098 describe po mysql-844cf969f6-swxtx -n default
functional_test.go:1804: (dbg) kubectl --context functional-372098 describe po mysql-844cf969f6-swxtx -n default:
Name:             mysql-844cf969f6-swxtx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-372098/192.168.49.2
Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
Labels:           app=mysql
                  pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n2f6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2n2f6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-swxtx to functional-372098
  Warning  Failed     8m35s (x3 over 9m44s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling  7m13s (x5 over 10m)    kubelet  Pulling image "docker.io/mysql:5.7"
  Warning  Failed   7m12s (x2 over 9m59s)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   7m12s (x5 over 9m59s)   kubelet  Error: ErrImagePull
  Warning  Failed   4m47s (x20 over 9m58s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  4m36s (x21 over 9m58s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-372098 logs mysql-844cf969f6-swxtx -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-372098 logs mysql-844cf969f6-swxtx -n default: exit status 1 (64.643454ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-swxtx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-372098 logs mysql-844cf969f6-swxtx -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
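Since every pull of docker.io/mysql:5.7 from inside the node hit the anonymous rate limit, one possible workaround is to side-load the image into the node's containerd store so no registry pull is needed at test time. A sketch, assuming the host running the tests can still obtain the image; the profile name is taken from this run:

  # Pull (or reuse a cached copy of) the image on the host, then copy it
  # into the functional-372098 node before re-running the MySQL test.
  docker pull docker.io/mysql:5.7
  minikube -p functional-372098 image load docker.io/mysql:5.7

  # Verify the image is visible to containerd inside the node.
  minikube -p functional-372098 image ls | grep mysql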
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-372098
helpers_test.go:243: (dbg) docker inspect functional-372098:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	        "Created": "2025-12-07T23:02:06.580313062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:02:06.615285569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hostname",
	        "HostsPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/hosts",
	        "LogPath": "/var/lib/docker/containers/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0/24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0-json.log",
	        "Name": "/functional-372098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-372098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-372098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24bb4d0c1a9501548656232dddf4f4427c7d849bffbfae916abe02719fb09ba0",
	                "LowerDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff-init/diff:/var/lib/docker/overlay2/3cf0cc8fa8911fedbf601d360c4398569b0f30cd394076654dad88848c4dafaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cdef81a9e2b8026bbb09923c6fe3053f2a29e3d327294472179fe2c4fcc19ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-372098",
	                "Source": "/var/lib/docker/volumes/functional-372098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-372098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-372098",
	                "name.minikube.sigs.k8s.io": "functional-372098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2fa3a3be74c47da6ccc47461bbc5b06579e767397576b1af8ed9fd637c04ccf",
	            "SandboxKey": "/var/run/docker/netns/a2fa3a3be74c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-372098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9ed146fed112ca9fd37afbd507e6575957a4ed48d7066777c5186f06129931c",
	                    "EndpointID": "acda0ebebf0dc04b91546f7aa782adebebea917dc5ceb5777c345b196684711a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "16:b1:7a:3d:6a:a7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-372098",
	                        "24bb4d0c1a95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
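For reference, the inspect dump above records the kic container's published ports (22, 2376, 5000, 8441, 32443, each bound to a 127.0.0.1 host port) and its 4 GiB memory limit. A minimal sketch of pulling the same fields directly with docker inspect's --format templates, assuming the container name functional-372098 shown in the dump:

	docker inspect functional-372098 --format '{{json .NetworkSettings.Ports}}'
	docker inspect functional-372098 --format '{{.HostConfig.Memory}}'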
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-372098 -n functional-372098
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 logs -n 25: (1.212545643s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-372098 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh            │ functional-372098 ssh -- ls -la /mount-9p                                                                                                           │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh            │ functional-372098 ssh sudo umount -f /mount-9p                                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ ssh            │ functional-372098 ssh findmnt -T /mount1                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount          │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount3 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount          │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount2 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ mount          │ -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount1 --alsologtostderr -v=1                │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ ssh            │ functional-372098 ssh findmnt -T /mount1                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh            │ functional-372098 ssh findmnt -T /mount2                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh            │ functional-372098 ssh findmnt -T /mount3                                                                                                            │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ mount          │ -p functional-372098 --kill=true                                                                                                                    │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start          │ -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start          │ -p functional-372098 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ start          │ -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-372098 --alsologtostderr -v=1                                                                                      │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ update-context │ functional-372098 update-context --alsologtostderr -v=2                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ update-context │ functional-372098 update-context --alsologtostderr -v=2                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ update-context │ functional-372098 update-context --alsologtostderr -v=2                                                                                             │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format short --alsologtostderr                                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format yaml --alsologtostderr                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ ssh            │ functional-372098 ssh pgrep buildkitd                                                                                                               │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │                     │
	│ image          │ functional-372098 image build -t localhost/my-image:functional-372098 testdata/build --alsologtostderr                                              │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls                                                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format json --alsologtostderr                                                                                          │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	│ image          │ functional-372098 image ls --format table --alsologtostderr                                                                                         │ functional-372098 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:10 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:10:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:10:02.675202  457901 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:10:02.675303  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675311  457901 out.go:374] Setting ErrFile to fd 2...
	I1207 23:10:02.675315  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675574  457901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:10:02.675985  457901 out.go:368] Setting JSON to false
	I1207 23:10:02.676932  457901 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6752,"bootTime":1765142251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:10:02.676985  457901 start.go:143] virtualization: kvm guest
	I1207 23:10:02.678751  457901 out.go:179] * [functional-372098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:10:02.679921  457901 notify.go:221] Checking for updates...
	I1207 23:10:02.679967  457901 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:10:02.681267  457901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:10:02.682437  457901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:10:02.683434  457901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:10:02.684504  457901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:10:02.685516  457901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:10:02.687010  457901 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:10:02.687493  457901 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:10:02.711459  457901 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:10:02.711573  457901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:10:02.764536  457901 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 23:10:02.755142825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:10:02.764648  457901 docker.go:319] overlay module found
	I1207 23:10:02.766932  457901 out.go:179] * Using the docker driver based on existing profile
	I1207 23:10:02.767937  457901 start.go:309] selected driver: docker
	I1207 23:10:02.767949  457901 start.go:927] validating driver "docker" against &{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:10:02.768026  457901 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:10:02.769497  457901 out.go:203] 
	W1207 23:10:02.770495  457901 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 23:10:02.771545  457901 out.go:203] 
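The dry-run above intentionally requested --memory 250MB, which the RSRC_INSUFFICIENT_REQ_MEMORY check rejects as being below the usable minimum of 1800MB stated in the message. For comparison, a sketch of the same dry-run with a memory value that would clear that check (2048MB is an illustrative value, not taken from the log):

	out/minikube-linux-amd64 start -p functional-372098 --dry-run --memory 2048MB --alsologtostderr --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0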
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e14f9112ffcff       56cc512116c8f       3 minutes ago       Exited              mount-munger              0                   fe5ce75149fc9       busybox-mount                               default
	2ccf118549ae3       aa9d02839d8de       10 minutes ago      Running             kube-apiserver            0                   72a2bc1d4f365       kube-apiserver-functional-372098            kube-system
	ef3b33e5c5d94       45f3cc72d235f       10 minutes ago      Running             kube-controller-manager   1                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	0c06ca4e4069e       a3e246e9556e9       10 minutes ago      Running             etcd                      1                   4d67e174203fb       etcd-functional-372098                      kube-system
	67503af5d77f5       7bb6219ddab95       10 minutes ago      Running             kube-scheduler            1                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	10581c840670f       8a4ded35a3eb1       10 minutes ago      Running             kube-proxy                1                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	5cf5db2404f73       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	bf437fd6022b3       6e38f40d628db       10 minutes ago      Running             storage-provisioner       1                   f498869d5e01e       storage-provisioner                         kube-system
	d8af143545632       aa5e3ebc0dfed       10 minutes ago      Running             coredns                   1                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	e4204dc06ed5f       aa5e3ebc0dfed       11 minutes ago      Exited              coredns                   0                   887f2a7260308       coredns-7d764666f9-k5tm5                    kube-system
	77c5419a5af32       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       0                   f498869d5e01e       storage-provisioner                         kube-system
	b4896657616d8       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   9e6aa8abdee4b       kindnet-t6d26                               kube-system
	91c336f27dcdb       8a4ded35a3eb1       11 minutes ago      Exited              kube-proxy                0                   dea1d5f421a2c       kube-proxy-rsjbv                            kube-system
	27c569bb3458a       45f3cc72d235f       11 minutes ago      Exited              kube-controller-manager   0                   85198daee096f       kube-controller-manager-functional-372098   kube-system
	bb6c22ba4f99b       a3e246e9556e9       11 minutes ago      Exited              etcd                      0                   4d67e174203fb       etcd-functional-372098                      kube-system
	def0377c6f0d1       7bb6219ddab95       11 minutes ago      Exited              kube-scheduler            0                   0e45f2e1f526f       kube-scheduler-functional-372098            kube-system
	
	
	==> containerd <==
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.851470286Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.1GB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.852336951Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.2MB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.852443030Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.853307346Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.853430682Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.854249447Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.2MB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.854351424Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.1GB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.854976591Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.2MB.events\""
	Dec 07 23:13:21 functional-372098 containerd[3768]: time="2025-12-07T23:13:21.855048027Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.870888084Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.870987788Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod7dedf5d8_b6fd_49a9_9161_47d438f3fc4e.slice/cri-containerd-5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.871753301Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.871858129Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc277aa187e9a67d3af88edb77e7859f.slice/cri-containerd-2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.872579515Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.872716571Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52f5009c_b0c2_4c05_afea_472b842f9ef4.slice/cri-containerd-10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.873398521Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.873480288Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804b8ebe_b61a_4547_874f_b95e8add978c.slice/cri-containerd-d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.874119648Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.874217651Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d8879b80172ef0b147ae38507cc833.slice/cri-containerd-67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.875001837Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.875107050Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf16c9eb383622fd7c3cc4026e467886d.slice/cri-containerd-0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.875950955Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.876049400Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a996c0b_b206_47a4_9c47_8f7137de1f1a.slice/cri-containerd-bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f.scope/hugetlb.1GB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.876640327Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.2MB.events\""
	Dec 07 23:13:31 functional-372098 containerd[3768]: time="2025-12-07T23:13:31.876734513Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1042ef16f3b4ac85da716dadc81299.slice/cri-containerd-ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640.scope/hugetlb.1GB.events\""
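The repeated "unable to parse \"max 0\" as a uint" errors above appear to come from the cgroup metrics collector reading cgroup v2 hugetlb event files, which hold key/value pairs rather than a bare integer. A sketch of what such a file contains on a cgroup v2 host (path shortened for illustration):

	cat /sys/fs/cgroup/kubepods.slice/.../hugetlb.2MB.events
	max 0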
	
	
	==> coredns [d8af143545632d45bdb3a5db702a65d9012e593185181be61a7118fe16665a1c] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42358 - 39813 "HINFO IN 2893170015885620547.3879998501473278557. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.879961652s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [e4204dc06ed5fc87b4fd716a7b639fe6349d52dc60d8e9c331ac181704dac256] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:44361 - 37708 "HINFO IN 8252574301961499388.8195395475701870576. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085806688s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-372098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-372098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-372098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_02_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:02:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-372098
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:13:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:10:19 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:10:19 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:10:19 +0000   Sun, 07 Dec 2025 23:02:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:10:19 +0000   Sun, 07 Dec 2025 23:02:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-372098
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2f15a4e4-5557-42d7-a12d-8f37ddc06bdc
	  Boot ID:                    60b18ead-4d86-44c7-8f9b-425eebb82278
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-h9ctn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-rt6h6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  default                     mysql-844cf969f6-swxtx                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-7d764666f9-k5tm5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-372098                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-t6d26                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-372098              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-372098     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-rsjbv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-372098              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-zbt2f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-nghff          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-372098 event: Registered Node functional-372098 in Controller
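The node description above (including the mysql-844cf969f6-swxtx requests of 600m CPU / 512Mi relevant to the failing MySQL test) can be regenerated against the same cluster with kubectl; a sketch assuming the kubeconfig context carries the profile name, as is minikube's default:

	kubectl --context functional-372098 describe node functional-372098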
	
	
	==> dmesg <==
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 21:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 72 e0 b5 03 dc 1a 06 a4 e6 fc 03 74 08 00
	[Dec 7 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 9d 47 68 73 f8 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a 51 85 40 99 2c 08 06
	[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +0.003775] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 fe e4 4c 66 38 08 06
	[ +24.208165] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	[Dec 7 22:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 26 10 e0 03 10 08 06
	[  +0.000351] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 19 be 0c 82 68 08 06
	[  +6.093825] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 95 3b ef b3 e0 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e ac 25 ad 6e 00 08 06
	[ +21.488023] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e e7 41 9a 21 38 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 12 50 8f 29 c2 08 06
	
	
	==> etcd [0c06ca4e4069ec6b9caba6080fd1a53cf6b2f4856d57231a8ad4d6568836cee9] <==
	{"level":"warn","ts":"2025-12-07T23:03:12.258230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.264336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.270498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.277120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.291838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.298227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.304563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.312856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.318729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.325848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.331520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.344456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.351231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.357336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.363314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.369310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.375148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.388450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.395023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.401166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.407249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:03:12.457756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59714","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:13:11.999861Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1099}
	{"level":"info","ts":"2025-12-07T23:13:12.018777Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1099,"took":"18.599612ms","hash":2532742970,"current-db-size-bytes":3858432,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1929216,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-07T23:13:12.018820Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2532742970,"revision":1099,"compact-revision":-1}
	
	
	==> etcd [bb6c22ba4f99b3a5e0ad4f225337ef6d9c678ad8f673d0daa3576f8291e9becb] <==
	{"level":"warn","ts":"2025-12-07T23:02:14.265025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.271024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.288678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.294504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.300407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.306862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:02:14.353907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:03:09.553511Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-07T23:03:09.553622Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-07T23:03:09.553773Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555329Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-07T23:03:09.555392Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.555422Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-07T23:03:09.555453Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-07T23:03:09.555451Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555458Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555508Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555518Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-07T23:03:09.555542Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-07T23:03:09.555560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557307Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-07T23:03:09.557363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-07T23:03:09.557392Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-07T23:03:09.557400Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-372098","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 23:13:39 up  1:56,  0 user,  load average: 0.35, 0.25, 0.38
	Linux functional-372098 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5cf5db2404f7395a1a732376619ff2daf3e0ced9747819ed67cb6d91f1a8ef1f] <==
	I1207 23:11:30.815561       1 main.go:301] handling current node
	I1207 23:11:40.808499       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:11:40.808535       1 main.go:301] handling current node
	I1207 23:11:50.808209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:11:50.808258       1 main.go:301] handling current node
	I1207 23:12:00.806741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:12:00.806796       1 main.go:301] handling current node
	I1207 23:12:10.815568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:12:10.815611       1 main.go:301] handling current node
	I1207 23:12:20.808368       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:12:20.808405       1 main.go:301] handling current node
	I1207 23:12:30.808164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:12:30.808197       1 main.go:301] handling current node
	I1207 23:12:40.807162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:12:40.807200       1 main.go:301] handling current node
	I1207 23:12:50.808508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:12:50.808541       1 main.go:301] handling current node
	I1207 23:13:00.807102       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:13:00.807133       1 main.go:301] handling current node
	I1207 23:13:10.807411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:13:10.807442       1 main.go:301] handling current node
	I1207 23:13:20.807422       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:13:20.807460       1 main.go:301] handling current node
	I1207 23:13:30.815524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:13:30.815555       1 main.go:301] handling current node
	
	
	==> kindnet [b4896657616d8a5bf9cd25346f14a5f0022fe77640a87ffd0ad1d235c1b32b21] <==
	I1207 23:02:23.611610       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:02:23.611876       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1207 23:02:23.612007       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:02:23.612023       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:02:23.612042       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:02:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:02:23.812318       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:02:23.812611       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:02:23.812779       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:02:23.813113       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:02:24.205586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:02:24.205618       1 metrics.go:72] Registering metrics
	I1207 23:02:24.205710       1 controller.go:711] "Syncing nftables rules"
	I1207 23:02:33.816182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:33.816279       1 main.go:301] handling current node
	I1207 23:02:43.816778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:43.816822       1 main.go:301] handling current node
	I1207 23:02:53.814190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:02:53.814234       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ccf118549ae3560f3dbff2f8fcbc0f09d31d0aa0642ee875f2fb7d6d8a37889] <==
	I1207 23:03:12.895428       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:03:12.895433       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:03:12.895439       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:03:12.900034       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:03:12.916725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:03:12.917953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:03:13.621946       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:03:13.798835       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1207 23:03:14.002955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:03:14.004054       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:03:14.008020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:03:14.479237       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:03:14.565406       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:03:14.609604       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:03:14.614746       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:03:26.657356       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:03:32.687174       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.66.15"}
	I1207 23:03:38.179099       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.192.100"}
	I1207 23:03:38.781988       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.9.0"}
	I1207 23:03:39.826372       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.28.61"}
	I1207 23:09:18.560342       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.155.3"}
	I1207 23:10:03.625772       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:10:03.720562       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.212.114"}
	I1207 23:10:03.731953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.202.202"}
	I1207 23:13:12.837591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [27c569bb3458ad14783a895d6a725d0680973c56f076d8afc2b12298f8f95ba4] <==
	I1207 23:02:21.521818       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522069       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522103       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522229       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522340       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522614       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522682       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522732       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.522819       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523110       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523382       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.523713       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524650       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524812       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.524910       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.525007       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.526069       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:21.528785       1 range_allocator.go:433] "Set node PodCIDR" node="functional-372098" podCIDRs=["10.244.0.0/24"]
	I1207 23:02:21.535609       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622638       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:21.622653       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:02:21.622657       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:02:21.627127       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:36.682963       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [ef3b33e5c5d94dd398ff11c7a7b5c08dda5e6db1fb86d0b06d262a8a1606d640] <==
	I1207 23:03:16.031183       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:16.031625       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.031943       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.032375       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033656       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033712       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033727       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033905       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.033735       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034026       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034591       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034672       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.034741       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.036367       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.038000       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128372       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:16.128388       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:03:16.128393       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:03:16.132205       1 shared_informer.go:377] "Caches are synced"
	E1207 23:10:03.668806       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.672253       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.675555       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.680369       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.680408       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 23:10:03.684339       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [10581c840670f49559436a75d55800e9640286d59204ff2ba47af75f05b78f72] <==
	I1207 23:03:00.432540       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:03:00.497444       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:20.097782       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:20.097817       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:03:20.097903       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:03:20.118707       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:03:20.118766       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:03:20.124259       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:03:20.124567       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:03:20.124584       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:20.126112       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:03:20.126122       1 config.go:200] "Starting service config controller"
	I1207 23:03:20.126148       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:03:20.126133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:03:20.126225       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:03:20.126240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:03:20.126247       1 config.go:309] "Starting node config controller"
	I1207 23:03:20.126584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:03:20.126592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:03:20.226321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:03:20.226350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:03:20.226821       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [91c336f27dcdb0f7a126951cd6bcba9bcba4bc625d39ad62a68d1683a1d4327a] <==
	I1207 23:02:23.207160       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:02:23.278330       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:02:23.378882       1 shared_informer.go:377] "Caches are synced"
	I1207 23:02:23.378921       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:02:23.379062       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:02:23.398024       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:02:23.398074       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:02:23.403163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:02:23.403501       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:02:23.403527       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:02:23.405445       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:02:23.405465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:02:23.405483       1 config.go:200] "Starting service config controller"
	I1207 23:02:23.405488       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:02:23.405499       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:02:23.405503       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:02:23.405547       1 config.go:309] "Starting node config controller"
	I1207 23:02:23.405552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:02:23.505835       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:02:23.505858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:02:23.505870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:02:23.505883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67503af5d77f52fbbfacae3ab1b91c97dd4f963ee89dfadeae82d59c1895e8e6] <==
	I1207 23:03:11.407436       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:03:12.809581       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:03:12.809621       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:03:12.809633       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:03:12.809642       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:03:12.836375       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:03:12.836486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:03:12.840380       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:03:12.840479       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:03:12.840534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:03:12.840628       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:03:12.941464       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [def0377c6f0d18e86ee38cf410ad1b34f32ba4994e7e4e73c8c80acfcd0f81fd] <==
	E1207 23:02:15.676375       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.677255       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1207 23:02:15.702232       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1207 23:02:15.703102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1207 23:02:15.772992       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.773938       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1207 23:02:15.776967       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1207 23:02:15.777880       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1207 23:02:15.808060       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1207 23:02:15.808975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1207 23:02:15.828950       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1207 23:02:15.829754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1207 23:02:15.847911       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:02:15.850936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1207 23:02:15.898865       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:02:15.899689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1207 23:02:15.951270       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1207 23:02:15.952209       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1207 23:02:17.937339       1 shared_informer.go:377] "Caches are synced"
	I1207 23:03:09.602953       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1207 23:03:09.603068       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1207 23:03:09.603098       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1207 23:03:09.603105       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1207 23:03:09.603097       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1207 23:03:09.603128       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 07 23:13:06 functional-372098 kubelet[4673]: E1207 23:13:06.635386    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:13:06 functional-372098 kubelet[4673]: E1207 23:13:06.636183    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:13:06 functional-372098 kubelet[4673]: E1207 23:13:06.636517    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	Dec 07 23:13:09 functional-372098 kubelet[4673]: E1207 23:13:09.635833    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:13:11 functional-372098 kubelet[4673]: E1207 23:13:11.636102    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	Dec 07 23:13:18 functional-372098 kubelet[4673]: E1207 23:13:18.635034    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" containerName="kubernetes-dashboard"
	Dec 07 23:13:18 functional-372098 kubelet[4673]: E1207 23:13:18.636294    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	Dec 07 23:13:19 functional-372098 kubelet[4673]: E1207 23:13:19.634554    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" containerName="dashboard-metrics-scraper"
	Dec 07 23:13:19 functional-372098 kubelet[4673]: E1207 23:13:19.635900    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" podUID="b536e949-6d38-400b-bece-d762f111be94"
	Dec 07 23:13:20 functional-372098 kubelet[4673]: E1207 23:13:20.637123    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:13:20 functional-372098 kubelet[4673]: E1207 23:13:20.638568    4673 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-k5tm5" containerName="coredns"
	Dec 07 23:13:20 functional-372098 kubelet[4673]: E1207 23:13:20.638641    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="95c0efdc-87b6-4bd8-aef3-566618b1523e"
	Dec 07 23:13:21 functional-372098 kubelet[4673]: E1207 23:13:21.635686    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:13:21 functional-372098 kubelet[4673]: E1207 23:13:21.636227    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:13:24 functional-372098 kubelet[4673]: E1207 23:13:24.636656    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	Dec 07 23:13:30 functional-372098 kubelet[4673]: E1207 23:13:30.635909    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" containerName="kubernetes-dashboard"
	Dec 07 23:13:30 functional-372098 kubelet[4673]: E1207 23:13:30.637176    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-nghff" podUID="7558f757-a03e-4c8d-a71a-92ce283b4c06"
	Dec 07 23:13:31 functional-372098 kubelet[4673]: E1207 23:13:31.635380    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-h9ctn" podUID="407fccce-e612-4fd6-9dea-5557b2a23bcb"
	Dec 07 23:13:32 functional-372098 kubelet[4673]: E1207 23:13:32.635634    4673 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" containerName="dashboard-metrics-scraper"
	Dec 07 23:13:32 functional-372098 kubelet[4673]: E1207 23:13:32.635928    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-rt6h6" podUID="4120483f-8517-4e8a-87e9-d3efcbee6356"
	Dec 07 23:13:32 functional-372098 kubelet[4673]: E1207 23:13:32.636925    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zbt2f" podUID="b536e949-6d38-400b-bece-d762f111be94"
	Dec 07 23:13:34 functional-372098 kubelet[4673]: E1207 23:13:34.635748    4673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-372098" containerName="kube-apiserver"
	Dec 07 23:13:34 functional-372098 kubelet[4673]: E1207 23:13:34.636814    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="07af374e-0a50-41d2-a6ec-bc5002beba0e"
	Dec 07 23:13:35 functional-372098 kubelet[4673]: E1207 23:13:35.635137    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="95c0efdc-87b6-4bd8-aef3-566618b1523e"
	Dec 07 23:13:39 functional-372098 kubelet[4673]: E1207 23:13:39.636159    4673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-swxtx" podUID="40f4d455-e467-4886-9c6d-8ee0145d3d14"
	
	
	==> storage-provisioner [77c5419a5af32a3adbfdb08f245cbdfbee4d4594dda757a4a341591f0f5757ef] <==
	I1207 23:02:34.473761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-372098_2723da14-e3ab-463a-b591-bd211f668a0a!
	W1207 23:02:36.381294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:36.384835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.387295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:38.390656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.394077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:40.397975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.400909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:42.405558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.409032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:44.412633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.416254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:46.420950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.424007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:48.428978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.431882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:50.436004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.438706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:52.442377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.445344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:54.450611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.453177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:56.457032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.459733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:02:58.463288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bf437fd6022b37ff22a71dadc51d7104a570e3055a4a8150eb79becb4233753f] <==
	W1207 23:13:14.848979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:16.851409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:16.855097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:18.857302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:18.861757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:20.864131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:20.867760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:22.870615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:22.875267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:24.877828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:24.881350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:26.884758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:26.888408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:28.890668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:28.894096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:30.897163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:30.900798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:32.904252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:32.908731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:34.911570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:34.915279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:36.918184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:36.921643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:38.924314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:13:38.929057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
helpers_test.go:269: (dbg) Run:  kubectl --context functional-372098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff: exit status 1 (93.148684ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://e14f9112ffcff7df1cd79e835cdc327efffd87d59175b498d92b80986079fea1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 23:09:55 +0000
	      Finished:     Sun, 07 Dec 2025 23:09:55 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfsq7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zfsq7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m46s  default-scheduler  Successfully assigned default/busybox-mount to functional-372098
	  Normal  Pulling    3m45s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m45s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (625ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    3m45s  kubelet            Container created
	  Normal  Started    3m45s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-h9ctn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bq8h8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bq8h8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-5758569b79-h9ctn to functional-372098
	  Normal   Pulling    6m55s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m53s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m53s (x5 over 10m)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m54s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m41s (x21 over 10m)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-rt6h6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:09:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsmdx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tsmdx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m22s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-rt6h6 to functional-372098
	  Normal   Pulling    86s (x5 over 4m22s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     85s (x5 over 4m21s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   85s (x5 over 4m21s)   kubelet  Error: ErrImagePull
	  Warning  Failed   19s (x15 over 4m20s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  8s (x16 over 4m20s)   kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-swxtx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n2f6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2n2f6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-swxtx to functional-372098
	  Warning  Failed     8m37s (x3 over 9m46s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m15s (x5 over 10m)  kubelet  Pulling image "docker.io/mysql:5.7"
	  Warning  Failed   7m14s (x2 over 10m)  kubelet  Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   7m14s (x5 over 10m)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m49s (x20 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  1s (x42 over 10m)     kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:39 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cjdf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cjdf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-372098
	  Normal   Pulling    7m13s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m12s (x5 over 9m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   7m12s (x5 over 9m59s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m54s (x20 over 9m59s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m42s (x21 over 9m59s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-372098/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 23:03:48 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b56ct (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-b56ct:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m52s                  default-scheduler  Successfully assigned default/sp-pod to functional-372098
	  Normal   Pulling    6m55s (x5 over 9m51s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m54s (x5 over 9m50s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m54s (x5 over 9m50s)   kubelet  Error: ErrImagePull
	  Normal   BackOff  4m46s (x21 over 9m50s)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   4m46s (x21 over 9m50s)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-zbt2f" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-nghff" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-372098 describe pod busybox-mount hello-node-5758569b79-h9ctn hello-node-connect-9f67c86d4-rt6h6 mysql-844cf969f6-swxtx nginx-svc sp-pod dashboard-metrics-scraper-5565989548-zbt2f kubernetes-dashboard-b84665fb8-nghff: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.68s)
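
Note: every failed pod in the describe output above is stuck for the same reason: unauthenticated pulls from Docker Hub are being answered with HTTP 429 (toomanyrequests), so the kubelet never obtains the echo-server, mysql, or nginx images. One way to take the registry out of the picture on a run like this is to pre-load the images into the profile's container runtime before the parallel tests start. The commands below are a minimal sketch of that idea, not part of the test harness; the profile name functional-372098 comes from the log, and the image list is illustrative.

	# Pull once on the host (only the host's Docker Hub quota is consumed)
	docker pull kicbase/echo-server:latest
	docker pull mysql:5.7
	docker pull nginx:alpine

	# Copy the images into the functional-372098 profile's containerd runtime
	# so pods can start without reaching registry-1.docker.io. This only helps
	# when the pod's imagePullPolicy allows a locally present image
	# (IfNotPresent); a policy of Always will still hit the registry.
	minikube -p functional-372098 image load kicbase/echo-server:latest
	minikube -p functional-372098 image load mysql:5.7
	minikube -p functional-372098 image load nginx:alpine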

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-372098 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-372098 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-h9ctn" [407fccce-e612-4fd6-9dea-5557b2a23bcb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-07 23:13:39.111847533 +0000 UTC m=+2559.264513617
functional_test.go:1460: (dbg) Run:  kubectl --context functional-372098 describe po hello-node-5758569b79-h9ctn -n default
functional_test.go:1460: (dbg) kubectl --context functional-372098 describe po hello-node-5758569b79-h9ctn -n default:
Name:             hello-node-5758569b79-h9ctn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-372098/192.168.49.2
Start Time:       Sun, 07 Dec 2025 23:03:38 +0000
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bq8h8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bq8h8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-5758569b79-h9ctn to functional-372098
Normal   Pulling    6m54s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m52s (x5 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   6m52s (x5 over 9m59s)   kubelet  Error: ErrImagePull
Warning  Failed   4m53s (x20 over 9m59s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m40s (x21 over 9m59s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-372098 logs hello-node-5758569b79-h9ctn -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-372098 logs hello-node-5758569b79-h9ctn -n default: exit status 1 (61.171851ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-h9ctn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-372098 logs hello-node-5758569b79-h9ctn -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.62s)
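
Note: an alternative to pre-loading images is to authenticate the pulls so that Docker Hub's per-account rate limit applies instead of the anonymous one. The sketch below is not something this test does itself; the secret name regcred and the credentials are placeholders, and the patch targets the hello-node deployment that the test creates.

	# Hypothetical: store Docker Hub credentials as a pull secret in the default namespace
	kubectl --context functional-372098 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>

	# Point the hello-node deployment at that secret for its image pulls
	kubectl --context functional-372098 patch deployment hello-node \
	  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'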

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (240.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-372098 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [07af374e-0a50-41d2-a6ec-bc5002beba0e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-372098 -n functional-372098
functional_test_tunnel_test.go:216: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-12-07 23:07:40.139670497 +0000 UTC m=+2200.292336581
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-372098 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-372098 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-372098/192.168.49.2
Start Time:       Sun, 07 Dec 2025 23:03:39 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cjdf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8cjdf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m1s                 default-scheduler  Successfully assigned default/nginx-svc to functional-372098
Normal   Pulling    73s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     72s (x5 over 3m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   72s (x5 over 3m59s)  kubelet  Error: ErrImagePull
Normal   BackOff  3s (x15 over 3m59s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed   3s (x15 over 3m59s)  kubelet  Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-372098 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-372098 logs nginx-svc -n default: exit status 1 (65.322303ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-372098 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (240.63s)
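
Note: the 4m0s wait that fails here is the harness polling for run=nginx-svc pods to become Ready. Roughly the same check, expressed directly with kubectl (the 240s timeout mirrors the test's budget; it keeps failing as long as the nginx:alpine pull is rate-limited):

	kubectl --context functional-372098 wait pod -l run=nginx-svc \
	  --for=condition=Ready --timeout=240s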

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (97.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
I1207 23:07:40.267855  375863 retry.go:31] will retry after 3.850302686s: Temporary Error: Get "http:": http: no Host in request URL
I1207 23:07:44.118460  375863 retry.go:31] will retry after 6.736930247s: Temporary Error: Get "http:": http: no Host in request URL
I1207 23:07:50.855971  375863 retry.go:31] will retry after 6.230572868s: Temporary Error: Get "http:": http: no Host in request URL
I1207 23:07:57.086751  375863 retry.go:31] will retry after 10.045643767s: Temporary Error: Get "http:": http: no Host in request URL
I1207 23:08:07.133207  375863 retry.go:31] will retry after 16.312612607s: Temporary Error: Get "http:": http: no Host in request URL
E1207 23:08:16.814964  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1207 23:08:23.446601  375863 retry.go:31] will retry after 31.206709658s: Temporary Error: Get "http:": http: no Host in request URL
E1207 23:08:38.051750  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1207 23:08:54.654457  375863 retry.go:31] will retry after 23.452608326s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-372098 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.100.28.61   10.100.28.61   80:31950/TCP   5m39s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (97.90s)
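
Note: the repeated Get "http:": http: no Host in request URL retries mean the test was handed an empty URL for nginx-svc, even though the tunnel did assign an external IP (10.100.28.61 in the svc output above). With minikube tunnel running, the address can be resolved and probed by hand as sketched below; the curl only succeeds once the nginx pod is actually Running, which never happens in this run because of the pull failures.

	# Read the LoadBalancer IP that minikube tunnel assigned to nginx-svc
	IP=$(kubectl --context functional-372098 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

	# Probe it the same way the test does, looking for the nginx welcome page
	curl -sf "http://$IP" | grep 'Welcome to nginx!'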

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 service --namespace=default --https --url hello-node: exit status 115 (524.994219ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30112
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-372098 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 service hello-node --url --format={{.IP}}: exit status 115 (518.532478ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-372098 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 service hello-node --url: exit status 115 (518.572537ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30112
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-372098 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30112
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.52s)
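
Note: the three ServiceCmd subtests (HTTPS, Format, URL) all exit with status 115 and SVC_UNREACHABLE for the same underlying reason: the hello-node service exists and has a NodePort (30112), but no ready pod backs it because hello-node-5758569b79-h9ctn is stuck in ImagePullBackOff. A quick way to confirm this from the same context:

	# An empty ENDPOINTS column means the service has no ready backends,
	# which is exactly the condition minikube reports as SVC_UNREACHABLE.
	kubectl --context functional-372098 get endpoints hello-node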

                                                
                                    

Test pass (362/418)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 3.83
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.75
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.57
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.39
30 TestBinaryMirror 0.8
31 TestOffline 61.87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 122.82
38 TestAddons/serial/Volcano 39.02
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 7.44
44 TestAddons/parallel/Registry 13.53
45 TestAddons/parallel/RegistryCreds 0.66
47 TestAddons/parallel/InspektorGadget 11.65
48 TestAddons/parallel/MetricsServer 5.61
50 TestAddons/parallel/CSI 49.99
51 TestAddons/parallel/Headlamp 16.42
52 TestAddons/parallel/CloudSpanner 5.48
54 TestAddons/parallel/NvidiaDevicePlugin 6.48
55 TestAddons/parallel/Yakd 10.65
56 TestAddons/parallel/AmdGpuDevicePlugin 5.48
57 TestAddons/StoppedEnableDisable 12.44
58 TestCertOptions 25.74
59 TestCertExpiration 215.85
61 TestForceSystemdFlag 25.25
62 TestForceSystemdEnv 23.01
67 TestErrorSpam/setup 22.28
68 TestErrorSpam/start 0.63
69 TestErrorSpam/status 0.91
70 TestErrorSpam/pause 1.37
71 TestErrorSpam/unpause 1.47
72 TestErrorSpam/stop 1.48
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 40.63
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.71
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.35
84 TestFunctional/serial/CacheCmd/cache/add_local 1.19
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 46.96
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.15
95 TestFunctional/serial/LogsFileCmd 1.16
96 TestFunctional/serial/InvalidService 4.19
98 TestFunctional/parallel/ConfigCmd 0.44
100 TestFunctional/parallel/DryRun 0.37
101 TestFunctional/parallel/InternationalLanguage 0.19
102 TestFunctional/parallel/StatusCmd 1.02
107 TestFunctional/parallel/AddonsCmd 0.16
110 TestFunctional/parallel/SSHCmd 0.56
111 TestFunctional/parallel/CpCmd 1.77
113 TestFunctional/parallel/FileSync 0.28
114 TestFunctional/parallel/CertSync 1.69
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
122 TestFunctional/parallel/License 0.45
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.46
127 TestFunctional/parallel/MountCmd/any-port 5.91
128 TestFunctional/parallel/ProfileCmd/profile_list 0.42
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
134 TestFunctional/parallel/ImageCommands/ImageBuild 1.77
135 TestFunctional/parallel/ImageCommands/Setup 1.04
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
142 TestFunctional/parallel/MountCmd/specific-port 1.86
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
149 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
150 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ServiceCmd/List 1.69
159 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 36.88
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 5.59
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.46
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.15
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.28
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.47
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 39.67
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.15
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.21
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.37
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.16
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.91
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.61
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.87
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.33
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.87
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
214 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.66
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.44
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.46
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 1.78
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.41
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.28
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.12
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.46
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.61
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.35
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.38
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 5.51
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.62
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.91
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.72
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.69
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
263 TestMultiControlPlane/serial/StartCluster 139.74
264 TestMultiControlPlane/serial/DeployApp 4.01
265 TestMultiControlPlane/serial/PingHostFromPods 1.2
266 TestMultiControlPlane/serial/AddWorkerNode 25.98
267 TestMultiControlPlane/serial/NodeLabels 0.06
268 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
269 TestMultiControlPlane/serial/CopyFile 17.04
270 TestMultiControlPlane/serial/StopSecondaryNode 12.68
271 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
272 TestMultiControlPlane/serial/RestartSecondaryNode 8.48
273 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
274 TestMultiControlPlane/serial/RestartClusterKeepsNodes 89.97
275 TestMultiControlPlane/serial/DeleteSecondaryNode 9.2
276 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
277 TestMultiControlPlane/serial/StopCluster 36.01
278 TestMultiControlPlane/serial/RestartCluster 57.34
279 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
280 TestMultiControlPlane/serial/AddSecondaryNode 48.84
281 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
286 TestJSONOutput/start/Command 36.39
287 TestJSONOutput/start/Audit 0
289 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
290 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
292 TestJSONOutput/pause/Command 0.65
293 TestJSONOutput/pause/Audit 0
295 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
296 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
298 TestJSONOutput/unpause/Command 0.58
299 TestJSONOutput/unpause/Audit 0
301 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
302 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
304 TestJSONOutput/stop/Command 5.85
305 TestJSONOutput/stop/Audit 0
307 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
308 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
309 TestErrorJSONOutput 0.23
311 TestKicCustomNetwork/create_custom_network 28.62
312 TestKicCustomNetwork/use_default_bridge_network 24.4
313 TestKicExistingNetwork 23.09
314 TestKicCustomSubnet 25.91
315 TestKicStaticIP 25.98
316 TestMainNoArgs 0.06
317 TestMinikubeProfile 47.95
320 TestMountStart/serial/StartWithMountFirst 4.18
321 TestMountStart/serial/VerifyMountFirst 0.26
322 TestMountStart/serial/StartWithMountSecond 4.2
323 TestMountStart/serial/VerifyMountSecond 0.26
324 TestMountStart/serial/DeleteFirst 1.65
325 TestMountStart/serial/VerifyMountPostDelete 0.27
326 TestMountStart/serial/Stop 1.25
327 TestMountStart/serial/RestartStopped 6.75
328 TestMountStart/serial/VerifyMountPostStop 0.27
331 TestMultiNode/serial/FreshStart2Nodes 60.89
332 TestMultiNode/serial/DeployApp2Nodes 3.5
333 TestMultiNode/serial/PingHostFrom2Pods 0.82
334 TestMultiNode/serial/AddNode 21.71
335 TestMultiNode/serial/MultiNodeLabels 0.06
336 TestMultiNode/serial/ProfileList 0.63
337 TestMultiNode/serial/CopyFile 9.69
338 TestMultiNode/serial/StopNode 2.22
339 TestMultiNode/serial/StartAfterStop 6.71
340 TestMultiNode/serial/RestartKeepsNodes 73.3
341 TestMultiNode/serial/DeleteNode 5.14
342 TestMultiNode/serial/StopMultiNode 23.95
343 TestMultiNode/serial/RestartMultiNode 49.01
344 TestMultiNode/serial/ValidateNameConflict 23.43
349 TestPreload 102.26
351 TestScheduledStopUnix 97.78
354 TestInsufficientStorage 11.19
355 TestRunningBinaryUpgrade 291.01
357 TestKubernetesUpgrade 329.05
358 TestMissingContainerUpgrade 129.97
359 TestStoppedBinaryUpgrade/Setup 0.93
360 TestStoppedBinaryUpgrade/Upgrade 312.71
369 TestPause/serial/Start 39.52
370 TestPause/serial/SecondStartNoReconfiguration 5.48
371 TestPause/serial/Pause 0.65
372 TestPause/serial/VerifyStatus 0.34
373 TestPause/serial/Unpause 0.61
374 TestPause/serial/PauseAgain 0.63
375 TestPause/serial/DeletePaused 2.66
376 TestPause/serial/VerifyDeletedResources 42.79
378 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
379 TestNoKubernetes/serial/StartWithK8s 19.93
380 TestNoKubernetes/serial/StartWithStopK8s 21.96
381 TestNoKubernetes/serial/Start 6.37
382 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
383 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
384 TestNoKubernetes/serial/ProfileList 1.71
385 TestNoKubernetes/serial/Stop 1.28
386 TestNoKubernetes/serial/StartNoArgs 5.91
387 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
395 TestNetworkPlugins/group/false 3.56
399 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
401 TestStartStop/group/old-k8s-version/serial/FirstStart 51.07
403 TestStartStop/group/no-preload/serial/FirstStart 46.57
405 TestStartStop/group/embed-certs/serial/FirstStart 42.47
406 TestStartStop/group/old-k8s-version/serial/DeployApp 9.25
407 TestStartStop/group/embed-certs/serial/DeployApp 7.23
408 TestStartStop/group/no-preload/serial/DeployApp 8.23
409 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
410 TestStartStop/group/old-k8s-version/serial/Stop 11.99
411 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
412 TestStartStop/group/embed-certs/serial/Stop 12.01
413 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
414 TestStartStop/group/no-preload/serial/Stop 12.03
415 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
416 TestStartStop/group/old-k8s-version/serial/SecondStart 52.78
417 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
418 TestStartStop/group/embed-certs/serial/SecondStart 48.47
419 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
420 TestStartStop/group/no-preload/serial/SecondStart 52.04
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
423 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
424 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
425 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
426 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
427 TestStartStop/group/embed-certs/serial/Pause 2.79
428 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
429 TestStartStop/group/old-k8s-version/serial/Pause 2.85
430 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
432 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.04
433 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
434 TestStartStop/group/no-preload/serial/Pause 3.73
436 TestStartStop/group/newest-cni/serial/FirstStart 22.7
437 TestNetworkPlugins/group/auto/Start 42.91
438 TestStartStop/group/newest-cni/serial/DeployApp 0
439 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.79
440 TestStartStop/group/newest-cni/serial/Stop 1.48
441 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.35
442 TestStartStop/group/newest-cni/serial/SecondStart 11.1
443 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
444 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
445 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
446 TestStartStop/group/newest-cni/serial/Pause 2.55
447 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.34
448 TestNetworkPlugins/group/kindnet/Start 38.5
449 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
450 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
451 TestNetworkPlugins/group/calico/Start 53.89
452 TestNetworkPlugins/group/auto/KubeletFlags 0.33
453 TestNetworkPlugins/group/auto/NetCatPod 10.2
454 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
455 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.7
456 TestNetworkPlugins/group/auto/DNS 0.16
457 TestNetworkPlugins/group/auto/Localhost 0.12
458 TestNetworkPlugins/group/auto/HairPin 0.13
459 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
460 TestNetworkPlugins/group/custom-flannel/Start 52.76
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
462 TestNetworkPlugins/group/kindnet/NetCatPod 8.98
463 TestNetworkPlugins/group/kindnet/DNS 0.15
464 TestNetworkPlugins/group/kindnet/Localhost 0.12
465 TestNetworkPlugins/group/kindnet/HairPin 0.12
466 TestNetworkPlugins/group/calico/ControllerPod 6.01
467 TestNetworkPlugins/group/calico/KubeletFlags 0.34
468 TestNetworkPlugins/group/calico/NetCatPod 9.23
469 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
470 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
471 TestNetworkPlugins/group/enable-default-cni/Start 70.15
472 TestNetworkPlugins/group/calico/DNS 0.15
473 TestNetworkPlugins/group/calico/Localhost 0.12
474 TestNetworkPlugins/group/calico/HairPin 0.12
475 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.4
476 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
477 TestNetworkPlugins/group/flannel/Start 50.07
478 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
479 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.32
480 TestNetworkPlugins/group/bridge/Start 63.32
481 TestNetworkPlugins/group/custom-flannel/DNS 0.24
482 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
483 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
484 TestNetworkPlugins/group/flannel/ControllerPod 6
485 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
486 TestNetworkPlugins/group/flannel/NetCatPod 9.16
487 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
488 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.17
489 TestNetworkPlugins/group/flannel/DNS 0.13
490 TestNetworkPlugins/group/flannel/Localhost 0.11
491 TestNetworkPlugins/group/flannel/HairPin 0.11
492 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
493 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
494 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
495 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
496 TestNetworkPlugins/group/bridge/NetCatPod 8.22
497 TestNetworkPlugins/group/bridge/DNS 0.15
498 TestNetworkPlugins/group/bridge/Localhost 0.11
499 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.28.0/json-events (3.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-868109 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-868109 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.825446416s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (3.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1207 22:31:03.711849  375863 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1207 22:31:03.711952  375863 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-868109
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-868109: exit status 85 (75.549491ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-868109 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-868109 │ jenkins │ v1.37.0 │ 07 Dec 25 22:30 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:30:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:30:59.937636  375875 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:30:59.937748  375875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:30:59.937756  375875 out.go:374] Setting ErrFile to fd 2...
	I1207 22:30:59.937760  375875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:30:59.937974  375875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	W1207 22:30:59.938076  375875 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22054-372312/.minikube/config/config.json: open /home/jenkins/minikube-integration/22054-372312/.minikube/config/config.json: no such file or directory
	I1207 22:30:59.938495  375875 out.go:368] Setting JSON to true
	I1207 22:30:59.939449  375875 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4409,"bootTime":1765142251,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:30:59.939510  375875 start.go:143] virtualization: kvm guest
	I1207 22:30:59.942874  375875 out.go:99] [download-only-868109] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1207 22:30:59.943001  375875 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 22:30:59.943061  375875 notify.go:221] Checking for updates...
	I1207 22:30:59.944231  375875 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:30:59.945591  375875 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:30:59.946808  375875 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:30:59.947862  375875 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:30:59.948866  375875 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:30:59.950743  375875 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:30:59.951000  375875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:30:59.974325  375875 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:30:59.974401  375875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:00.028245  375875 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-07 22:31:00.019111799 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:00.028347  375875 docker.go:319] overlay module found
	I1207 22:31:00.029937  375875 out.go:99] Using the docker driver based on user configuration
	I1207 22:31:00.029964  375875 start.go:309] selected driver: docker
	I1207 22:31:00.029971  375875 start.go:927] validating driver "docker" against <nil>
	I1207 22:31:00.030056  375875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:00.083540  375875 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-07 22:31:00.074473334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:00.083875  375875 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:31:00.084381  375875 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:31:00.084513  375875 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:31:00.086223  375875 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-868109 host does not exist
	  To start a cluster, run: "minikube start -p download-only-868109"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
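
The non-zero exit above is what the test expects here rather than a defect: a download-only profile never creates a control-plane host, so "minikube logs" has nothing to read and, as the output shows, exits with status 85. A hand-run version of the same check:

    out/minikube-linux-amd64 logs -p download-only-868109
    echo "minikube logs exit status: $?"    # 85 in this run; the host was never created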

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-868109
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (3.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-899458 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-899458 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.749214368s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1207 22:31:07.893586  375863 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1207 22:31:07.893645  375863 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-899458
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-899458: exit status 85 (70.973088ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-868109 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-868109 │ jenkins │ v1.37.0 │ 07 Dec 25 22:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ delete  │ -p download-only-868109                                                                                                                                                               │ download-only-868109 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ -o=json --download-only -p download-only-899458 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-899458 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:31:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:31:04.199671  376235 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:31:04.199945  376235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:04.199954  376235 out.go:374] Setting ErrFile to fd 2...
	I1207 22:31:04.199958  376235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:04.200156  376235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:31:04.200587  376235 out.go:368] Setting JSON to true
	I1207 22:31:04.201504  376235 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4413,"bootTime":1765142251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:31:04.201556  376235 start.go:143] virtualization: kvm guest
	I1207 22:31:04.203269  376235 out.go:99] [download-only-899458] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:31:04.203447  376235 notify.go:221] Checking for updates...
	I1207 22:31:04.204754  376235 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:31:04.206008  376235 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:31:04.207062  376235 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:31:04.208175  376235 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:31:04.209241  376235 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:31:04.211281  376235 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:31:04.211603  376235 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:31:04.234636  376235 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:31:04.234709  376235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:04.285815  376235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-07 22:31:04.276919097 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:04.285922  376235 docker.go:319] overlay module found
	I1207 22:31:04.287523  376235 out.go:99] Using the docker driver based on user configuration
	I1207 22:31:04.287551  376235 start.go:309] selected driver: docker
	I1207 22:31:04.287561  376235 start.go:927] validating driver "docker" against <nil>
	I1207 22:31:04.287649  376235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:04.336433  376235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-07 22:31:04.327630766 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:04.336645  376235 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:31:04.337176  376235 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:31:04.337347  376235 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:31:04.338976  376235 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-899458 host does not exist
	  To start a cluster, run: "minikube start -p download-only-899458"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-899458
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (3.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-009383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-009383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.564728775s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1207 22:31:11.883180  375863 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1207 22:31:11.883225  375863 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-009383
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-009383: exit status 85 (72.389405ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-868109 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-868109 │ jenkins │ v1.37.0 │ 07 Dec 25 22:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ delete  │ -p download-only-868109                                                                                                                                                                      │ download-only-868109 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ -o=json --download-only -p download-only-899458 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-899458 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ delete  │ -p download-only-899458                                                                                                                                                                      │ download-only-899458 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │ 07 Dec 25 22:31 UTC │
	│ start   │ -o=json --download-only -p download-only-009383 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-009383 │ jenkins │ v1.37.0 │ 07 Dec 25 22:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:31:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:31:08.371978  376588 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:31:08.372225  376588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:08.372234  376588 out.go:374] Setting ErrFile to fd 2...
	I1207 22:31:08.372239  376588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:31:08.372394  376588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:31:08.372849  376588 out.go:368] Setting JSON to true
	I1207 22:31:08.373715  376588 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4417,"bootTime":1765142251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:31:08.373766  376588 start.go:143] virtualization: kvm guest
	I1207 22:31:08.375437  376588 out.go:99] [download-only-009383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:31:08.375624  376588 notify.go:221] Checking for updates...
	I1207 22:31:08.376895  376588 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:31:08.378322  376588 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:31:08.379551  376588 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:31:08.383884  376588 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:31:08.385060  376588 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:31:08.386976  376588 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:31:08.387199  376588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:31:08.409052  376588 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:31:08.409154  376588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:08.462195  376588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-07 22:31:08.453118445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:08.462315  376588 docker.go:319] overlay module found
	I1207 22:31:08.464040  376588 out.go:99] Using the docker driver based on user configuration
	I1207 22:31:08.464069  376588 start.go:309] selected driver: docker
	I1207 22:31:08.464074  376588 start.go:927] validating driver "docker" against <nil>
	I1207 22:31:08.464157  376588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:31:08.517989  376588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-07 22:31:08.508165385 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:31:08.518180  376588 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:31:08.518710  376588 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:31:08.518871  376588 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:31:08.520578  376588 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-009383 host does not exist
	  To start a cluster, run: "minikube start -p download-only-009383"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-009383
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-804251 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-804251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-804251
--- PASS: TestDownloadOnlyKic (0.39s)
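
For reference, the KIC (docker-driver) download-only variant exercised here, followed by its cleanup; both commands are the ones the test runs above.

    out/minikube-linux-amd64 start --download-only -p download-docker-804251 \
      --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 delete -p download-docker-804251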

                                                
                                    
x
+
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1207 22:31:13.120147  375863 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-557917 --alsologtostderr --binary-mirror http://127.0.0.1:44153 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-557917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-557917
--- PASS: TestBinaryMirror (0.80s)
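
A sketch of the binary-mirror flow above. The mirror at http://127.0.0.1:44153 is something the test harness stands up for the duration of the run; reproducing this outside the harness assumes you have a reachable mirror of the Kubernetes release binaries at whatever address you pass. The profile name here is illustrative.

    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --alsologtostderr --binary-mirror http://127.0.0.1:44153 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 delete -p binary-mirror-demo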

                                                
                                    
x
+
TestOffline (61.87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-293616 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-293616 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (59.358682982s)
helpers_test.go:175: Cleaning up "offline-containerd-293616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-293616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-293616: (2.511023661s)
--- PASS: TestOffline (61.87s)
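
The offline start/delete cycle above, spelled out as a standalone sketch; the profile name and flag values are taken directly from the run.

    out/minikube-linux-amd64 start -p offline-containerd-293616 --alsologtostderr -v=1 \
      --memory=3072 --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 delete -p offline-containerd-293616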

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-007439
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-007439: exit status 85 (70.144789ms)

                                                
                                                
-- stdout --
	* Profile "addons-007439" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-007439"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-007439
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-007439: exit status 85 (69.43822ms)

                                                
                                                
-- stdout --
	* Profile "addons-007439" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-007439"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
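
Both PreSetup checks exercise the same guard: addon enable/disable against a profile that does not exist yet exits with status 85 and points at "minikube start". A minimal reproduction, using the commands from the two runs above:

    out/minikube-linux-amd64 addons enable dashboard -p addons-007439     # exit 85: profile not found
    out/minikube-linux-amd64 addons disable dashboard -p addons-007439    # exit 85: profile not found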

                                                
                                    
x
+
TestAddons/Setup (122.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-007439 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-007439 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.818268528s)
--- PASS: TestAddons/Setup (122.82s)
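
For readability, a shortened sketch of the setup command above, keeping a representative subset of the --addons flags the full run enables:

    out/minikube-linux-amd64 start -p addons-007439 --wait=true --memory=4096 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
      --addons=csi-hostpath-driver --addons=volumesnapshots --addons=gcp-auth --addons=volcano
    # the full run additionally enables registry-creds, cloud-spanner, inspektor-gadget,
    # nvidia-device-plugin, yakd, amd-gpu-device-plugin and storage-provisioner-rancher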

                                                
                                    
x
+
TestAddons/serial/Volcano (39.02s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 13.729129ms
addons_test.go:876: volcano-admission stabilized in 13.988721ms
addons_test.go:868: volcano-scheduler stabilized in 14.137594ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-nq89n" [20db0d86-976c-4ed8-8b08-b7b285dd9e2c] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002966668s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-phppc" [b395fd3f-5c3e-4850-97ed-5709b3f18a65] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002674084s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-ddcx7" [48d55264-8553-4f05-8bba-c112293cb965] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002714866s
addons_test.go:903: (dbg) Run:  kubectl --context addons-007439 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-007439 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-007439 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [b6b7656d-fb9c-46dc-b799-98a79de3da57] Pending
helpers_test.go:352: "test-job-nginx-0" [b6b7656d-fb9c-46dc-b799-98a79de3da57] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [b6b7656d-fb9c-46dc-b799-98a79de3da57] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003797909s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable volcano --alsologtostderr -v=1: (11.69644459s)
--- PASS: TestAddons/serial/Volcano (39.02s)
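
The Volcano sequence above, condensed into the manual steps it performs. The "get pods" line is an added convenience for watching the job pod and is not something the test itself runs.

    kubectl --context addons-007439 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-007439 create -f testdata/vcjob.yaml
    kubectl --context addons-007439 get vcjob -n my-volcano
    kubectl --context addons-007439 get pods -n my-volcano -l volcano.sh/job-name=test-job    # added for observation
    out/minikube-linux-amd64 -p addons-007439 addons disable volcano --alsologtostderr -v=1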

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-007439 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-007439 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-007439 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-007439 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bdaf9cdc-04a9-4afe-b740-3aad3a1761a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bdaf9cdc-04a9-4afe-b740-3aad3a1761a7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003346685s
addons_test.go:694: (dbg) Run:  kubectl --context addons-007439 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-007439 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-007439 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)
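
The fake-credentials check boils down to creating a pod plus a service account and then verifying that the gcp-auth addon injected the expected environment into the pod. The same probes, run by hand with the fixtures the test uses:

    kubectl --context addons-007439 create -f testdata/busybox.yaml
    kubectl --context addons-007439 create sa gcp-auth-test
    kubectl --context addons-007439 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-007439 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"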

                                                
                                    
x
+
TestAddons/parallel/Registry (13.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.906155ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-6p6bs" [0a78b775-c4c2-437d-8f4d-28a45858c31d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002684945s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7n2kv" [03a65721-16eb-45cd-9dda-3c1a53f037f8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002702299s
addons_test.go:392: (dbg) Run:  kubectl --context addons-007439 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-007439 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-007439 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.759463066s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 ip
2025/12/07 22:34:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.53s)
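
The registry check can be replayed manually: probe the in-cluster registry service from a throwaway busybox pod, then hit the node-IP endpoint that the DEBUG line above records. The curl line is an assumed stand-in for that GET, not a command the test runs.

    kubectl --context addons-007439 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    out/minikube-linux-amd64 -p addons-007439 ip                              # 192.168.49.2 in this run
    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.49.2:5000         # assumed stand-in for the DEBUG GET
    out/minikube-linux-amd64 -p addons-007439 addons disable registry --alsologtostderr -v=1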

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 36.796423ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-007439
addons_test.go:332: (dbg) Run:  kubectl --context addons-007439 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)
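
The registry-creds flow above, as the commands the test drives; the JSON config path is the repository's test fixture.

    out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-007439
    kubectl --context addons-007439 -n kube-system get secret -o yaml
    out/minikube-linux-amd64 -p addons-007439 addons disable registry-creds --alsologtostderr -v=1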

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.65s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fpz5v" [6abe4a1c-bcf2-4b13-95e2-06e844eabcda] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003786709s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable inspektor-gadget --alsologtostderr -v=1: (5.640647846s)
--- PASS: TestAddons/parallel/InspektorGadget (11.65s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.388087ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xclmv" [127d9996-068b-4d23-8215-895807a75987] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002505636s
addons_test.go:463: (dbg) Run:  kubectl --context addons-007439 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)
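
The same health check can be run manually once the metrics-server pod is Ready; a minimal sketch, assuming the addons-007439 profile:

	kubectl --context addons-007439 -n kube-system wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=6m
	kubectl --context addons-007439 top pods -n kube-system
	minikube -p addons-007439 addons disable metrics-server --alsologtostderr -v=1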

                                                
                                    
TestAddons/parallel/CSI (49.99s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1207 22:34:18.750954  375863 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1207 22:34:18.753956  375863 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1207 22:34:18.753978  375863 kapi.go:107] duration metric: took 3.049494ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.059206ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b20fa634-7ad8-4417-b7b7-950f94d8b8f6] Pending
helpers_test.go:352: "task-pv-pod" [b20fa634-7ad8-4417-b7b7-950f94d8b8f6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.002799593s
addons_test.go:572: (dbg) Run:  kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-007439 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-007439 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-007439 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-007439 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-007439 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f4d58a5a-f1cf-409d-a5f0-516e0f4437b1] Pending
helpers_test.go:352: "task-pv-pod-restore" [f4d58a5a-f1cf-409d-a5f0-516e0f4437b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f4d58a5a-f1cf-409d-a5f0-516e0f4437b1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.002910994s
addons_test.go:614: (dbg) Run:  kubectl --context addons-007439 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-007439 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-007439 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.513523495s)
--- PASS: TestAddons/parallel/CSI (49.99s)
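
The CSI hostpath sequence above is a plain PVC -> pod -> snapshot -> restore round trip; a condensed sketch of the same steps, assuming the testdata manifests from the minikube repository and the addons-007439 profile:

	kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pvc.yaml           # PVC "hpvc"
	kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod "task-pv-pod" mounts it
	kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-007439 delete pod task-pv-pod
	kubectl --context addons-007439 delete pvc hpvc
	kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # PVC "hpvc-restore" from the snapshot
	kubectl --context addons-007439 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml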

                                                
                                    
TestAddons/parallel/Headlamp (16.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-007439 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-95mh9" [fe54c9e7-2df6-484e-adf1-87cb42ccfcb5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-95mh9" [fe54c9e7-2df6-484e-adf1-87cb42ccfcb5] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003375968s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable headlamp --alsologtostderr -v=1: (5.658230028s)
--- PASS: TestAddons/parallel/Headlamp (16.42s)
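
Headlamp is enabled and checked the same way as the other addons; a minimal sketch, assuming the addons-007439 profile:

	minikube addons enable headlamp -p addons-007439 --alsologtostderr -v=1
	kubectl --context addons-007439 -n headlamp wait --for=condition=ready pod -l app.kubernetes.io/name=headlamp --timeout=8m
	minikube -p addons-007439 addons disable headlamp --alsologtostderr -v=1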

                                                
                                    
TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-szbxm" [6de828fa-6dfc-429e-be1e-44ef12c57c9b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003034946s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ntq2v" [0b9135a9-fc92-4f62-9720-da68d90dcd42] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002866953s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                    
TestAddons/parallel/Yakd (10.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xrbm6" [33fdb6ef-cc9c-408d-8191-d473b4b7d443] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003151212s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-007439 addons disable yakd --alsologtostderr -v=1: (5.640890073s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-vxzwd" [a0965c42-1d83-4d06-a17b-c97d11433f73] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003616867s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-007439 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.48s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-007439
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-007439: (12.156917035s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-007439
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-007439
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-007439
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

                                                
                                    
TestCertOptions (25.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-191159 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-191159 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.80288302s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-191159 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-191159 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-191159 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-191159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-191159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-191159: (2.252735032s)
--- PASS: TestCertOptions (25.74s)
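
The certificate check boils down to starting with extra SANs/IPs and a non-default API server port, then reading the generated apiserver certificate from the node; a minimal sketch using the same flags the test passes:

	minikube start -p cert-options-191159 --memory=3072 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=docker --container-runtime=containerd
	minikube -p cert-options-191159 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# the Subject Alternative Name block should list the extra IPs/DNS names, and
	# "kubectl config view" / admin.conf should point at port 8555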

                                                
                                    
TestCertExpiration (215.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-585553 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-585553 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.62817355s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-585553 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-585553 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.340426482s)
helpers_test.go:175: Cleaning up "cert-expiration-585553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-585553
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-585553: (2.884937873s)
--- PASS: TestCertExpiration (215.85s)
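
The expiration scenario starts a cluster whose certificates expire in 3 minutes, lets that window lapse (which accounts for most of the 215s runtime), then restarts with a much longer --cert-expiration so minikube regenerates them; a minimal sketch, with the wait duration being an assumption inferred from the timings above:

	minikube start -p cert-expiration-585553 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
	sleep 180   # assumed wait: let the short-lived certificates expire
	minikube start -p cert-expiration-585553 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd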

                                                
                                    
TestForceSystemdFlag (25.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-164801 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-164801 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.827744272s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-164801 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-164801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-164801
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-164801: (2.088914127s)
--- PASS: TestForceSystemdFlag (25.25s)
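
--force-systemd is verified by reading containerd's config from inside the node; a minimal sketch, assuming the thing to look for is the SystemdCgroup switch (the test itself only cats the file):

	minikube start -p force-systemd-flag-164801 --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
	minikube -p force-systemd-flag-164801 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup   # expect: SystemdCgroup = true
	minikube delete -p force-systemd-flag-164801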

                                                
                                    
TestForceSystemdEnv (23.01s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-163299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-163299 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.102770851s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-163299 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-163299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-163299
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-163299: (2.546136222s)
--- PASS: TestForceSystemdEnv (23.01s)

                                                
                                    
TestErrorSpam/setup (22.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-275240 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-275240 --driver=docker  --container-runtime=containerd
E1207 22:43:37.316256  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:43:57.798495  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-275240 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-275240 --driver=docker  --container-runtime=containerd: (22.282146215s)
--- PASS: TestErrorSpam/setup (22.28s)

                                                
                                    
TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

                                                
                                    
TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 status
--- PASS: TestErrorSpam/status (0.91s)

                                                
                                    
TestErrorSpam/pause (1.37s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 pause
--- PASS: TestErrorSpam/pause (1.37s)

                                                
                                    
TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 stop: (1.280614944s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-275240 --log_dir /tmp/nospam-275240 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/test/nested/copy/375863/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.63s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463278 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1207 22:44:38.760104  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-463278 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.631438925s)
--- PASS: TestFunctional/serial/StartWithProxy (40.63s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.71s)

=== RUN   TestFunctional/serial/SoftStart
I1207 22:44:48.935288  375863 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463278 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-463278 --alsologtostderr -v=8: (5.706393219s)
functional_test.go:678: soft start took 5.707188467s for "functional-463278" cluster.
I1207 22:44:54.642073  375863 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (5.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-463278 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-463278 /tmp/TestFunctionalserialCacheCmdcacheadd_local2731600784/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cache add minikube-local-cache-test:functional-463278
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cache delete minikube-local-cache-test:functional-463278
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-463278
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.592392ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)
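
The reload check deletes a cached image from inside the node and relies on "minikube cache reload" to push it back from the host-side cache; a minimal sketch against the functional-463278 profile:

	minikube -p functional-463278 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-463278 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image is gone from the node
	minikube -p functional-463278 cache reload
	minikube -p functional-463278 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again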

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 kubectl -- --context functional-463278 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-463278 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463278 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-463278 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.955050957s)
functional_test.go:776: restart took 46.955200418s for "functional-463278" cluster.
I1207 22:45:47.444276  375863 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (46.96s)
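
--extra-config forwards per-component flags through to the cluster, so the restart above re-provisions the apiserver with an extra admission plugin; a minimal sketch of the same invocation, with the verification comment being an assumption rather than something the test asserts:

	minikube start -p functional-463278 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# the kube-apiserver static pod manifest inside the node should now include
	# NamespaceAutoProvision in --enable-admission-plugins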

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-463278 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 logs: (1.14864186s)
--- PASS: TestFunctional/serial/LogsCmd (1.15s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.16s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 logs --file /tmp/TestFunctionalserialLogsFileCmd3190446891/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 logs --file /tmp/TestFunctionalserialLogsFileCmd3190446891/001/logs.txt: (1.157370869s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                    
TestFunctional/serial/InvalidService (4.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-463278 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-463278
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-463278: exit status 115 (331.922744ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30528 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-463278 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)
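
The failure mode being exercised is "minikube service" against a Service with no running endpoints: the NodePort URL is still printed, but the command exits 115 with SVC_UNREACHABLE; a minimal sketch against the functional-463278 profile:

	kubectl --context functional-463278 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-463278; echo "exit=$?"   # expect exit=115
	kubectl --context functional-463278 delete -f testdata/invalidsvc.yaml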

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 config get cpus: exit status 14 (84.008868ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 config get cpus: exit status 14 (78.522139ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
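
"minikube config get" distinguishes an unset key from a set one via its exit code, which is what the assertions above rely on; a minimal sketch:

	minikube -p functional-463278 config unset cpus
	minikube -p functional-463278 config get cpus; echo "exit=$?"   # key not found -> exit 14
	minikube -p functional-463278 config set cpus 2
	minikube -p functional-463278 config get cpus                   # prints 2, exit 0
	minikube -p functional-463278 config unset cpus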

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463278 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-463278 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (160.621195ms)

                                                
                                                
-- stdout --
	* [functional-463278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:45:56.927964  423349 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:56.928101  423349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:56.928111  423349 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:56.928115  423349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:56.928329  423349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:45:56.928784  423349 out.go:368] Setting JSON to false
	I1207 22:45:56.929826  423349 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5306,"bootTime":1765142251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:56.929883  423349 start.go:143] virtualization: kvm guest
	I1207 22:45:56.931396  423349 out.go:179] * [functional-463278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:56.932759  423349 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:56.932755  423349 notify.go:221] Checking for updates...
	I1207 22:45:56.935598  423349 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:56.936715  423349 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:45:56.937779  423349 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:45:56.938851  423349 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:56.939859  423349 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:56.941126  423349 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:45:56.941638  423349 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:56.964929  423349 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:56.965045  423349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:57.024338  423349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:61 SystemTime:2025-12-07 22:45:57.013952608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:57.024440  423349 docker.go:319] overlay module found
	I1207 22:45:57.026217  423349 out.go:179] * Using the docker driver based on existing profile
	I1207 22:45:57.027260  423349 start.go:309] selected driver: docker
	I1207 22:45:57.027275  423349 start.go:927] validating driver "docker" against &{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:57.027361  423349 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:57.028781  423349 out.go:203] 
	W1207 22:45:57.029689  423349 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 22:45:57.030657  423349 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463278 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.37s)
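
The dry run is a pure validation path: with --memory 250MB the start is rejected before any container work with RSRC_INSUFFICIENT_REQ_MEMORY (exit code 23), while the dry run without the bad flag succeeds; a minimal sketch:

	minikube start -p functional-463278 --dry-run --memory 250MB --driver=docker --container-runtime=containerd; echo "exit=$?"   # expect 23
	minikube start -p functional-463278 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd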

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-463278 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-463278 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (184.950866ms)

                                                
                                                
-- stdout --
	* [functional-463278] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:45:56.455779  423007 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:56.455878  423007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:56.455889  423007 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:56.455896  423007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:56.456224  423007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 22:45:56.456663  423007 out.go:368] Setting JSON to false
	I1207 22:45:56.457759  423007 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5305,"bootTime":1765142251,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:56.457819  423007 start.go:143] virtualization: kvm guest
	I1207 22:45:56.460184  423007 out.go:179] * [functional-463278] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:56.461456  423007 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:56.461477  423007 notify.go:221] Checking for updates...
	I1207 22:45:56.463458  423007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:56.464544  423007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 22:45:56.465857  423007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 22:45:56.470114  423007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:56.471263  423007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:56.472914  423007 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 22:45:56.473740  423007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:56.498652  423007 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:56.498828  423007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:56.560405  423007 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:45:56.549968936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:56.560499  423007 docker.go:319] overlay module found
	I1207 22:45:56.562275  423007 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1207 22:45:56.563582  423007 start.go:309] selected driver: docker
	I1207 22:45:56.563595  423007 start.go:927] validating driver "docker" against &{Name:functional-463278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-463278 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:56.563707  423007 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:56.565383  423007 out.go:203] 
	W1207 22:45:56.566409  423007 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 22:45:56.567466  423007 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
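
The French stdout/stderr above are the point of this test: InternationalLanguage runs the CLI with a French locale in the environment and a deliberately tiny memory request (250MiB here), so the run fails fast and the RSRC_INSUFFICIENT_REQ_MEMORY message must come back localized. As a rough, self-contained illustration of locale-driven message selection only (this is not minikube's actual translation machinery; the message map and lang() helper below are hypothetical):

// Illustrative only: env-driven message selection, loosely analogous to how a
// CLI can localize its fatal messages.  The catalog and lang() helper are
// hypothetical stand-ins, not minikube's real translation code.
package main

import (
	"fmt"
	"os"
	"strings"
)

var messages = map[string]map[string]string{
	"en": {"RSRC_INSUFFICIENT_REQ_MEMORY": "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is below the usable minimum of %dMB"},
	"fr": {"RSRC_INSUFFICIENT_REQ_MEMORY": "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée %d Mio est inférieure au minimum utilisable de %d Mo"},
}

// lang picks a two-letter language code from LC_ALL/LANG, defaulting to "en".
func lang() string {
	for _, v := range []string{os.Getenv("LC_ALL"), os.Getenv("LANG")} {
		if len(v) >= 2 {
			return strings.ToLower(v[:2])
		}
	}
	return "en"
}

func main() {
	tmpl, ok := messages[lang()]["RSRC_INSUFFICIENT_REQ_MEMORY"]
	if !ok {
		tmpl = messages["en"]["RSRC_INSUFFICIENT_REQ_MEMORY"]
	}
	fmt.Printf("X "+tmpl+"\n", 250, 1800)
}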

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
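
The second status invocation above exercises the -f/--format flag, which takes a Go text/template rendered against the status result; the "kublet:" text in the template is just a literal label the test chose, while {{.Kubelet}} and friends are the field references. A minimal sketch of that template mechanism, with a hypothetical Status struct standing in for minikube's own type:

// Minimal sketch of rendering a status --format template.  The Status struct
// is hypothetical and only mirrors the field names the test references.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}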

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh -n functional-463278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cp functional-463278:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1591724965/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh -n functional-463278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh -n functional-463278 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)
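
Each cp step above is verified the same way: copy a file in one direction with `minikube cp`, then `ssh ... sudo cat` it back so the contents can be compared with the original. A hedged sketch of that round-trip check (binary path, profile name, and file paths are taken from the log; the helper itself is hypothetical):

// Hypothetical round-trip check: copy a file into the node with `minikube cp`,
// read it back over `minikube ssh`, and compare with the local contents.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) ([]byte, error) {
	return exec.Command(name, args...).CombinedOutput()
}

func main() {
	const bin = "out/minikube-linux-amd64" // path used by this test run
	const profile = "functional-463278"

	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if out, err := run(bin, "-p", profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	remote, err := run(bin, "-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
		panic("cp round-trip mismatch")
	}
	fmt.Println("cp round-trip ok")
}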

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/375863/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /etc/test/nested/copy/375863/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/375863.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /etc/ssl/certs/375863.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/375863.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /usr/share/ca-certificates/375863.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3758632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /etc/ssl/certs/3758632.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3758632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /usr/share/ca-certificates/3758632.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
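
CertSync pushes the host's test certificates into the guest and then checks both the name-based copies (e.g. /etc/ssl/certs/375863.pem) and what look like OpenSSL subject-hash style entries (51391683.0, 3ec20f2e.0). A small sketch of validating that such a file actually holds a parseable PEM certificate; this is generic crypto/x509 usage, not minikube's own check, and the path is only an example from the log:

// Generic check that a file contains a PEM-encoded X.509 certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/ssl/certs/51391683.0")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", cert.Subject.String(), "expires:", cert.NotAfter)
}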

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-463278 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh "sudo systemctl is-active docker": exit status 1 (302.21948ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh "sudo systemctl is-active crio": exit status 1 (300.148357ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
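
The non-zero exits above are the expected outcome: with containerd as the active runtime, `systemctl is-active docker` and `systemctl is-active crio` print "inactive" and exit with status 3, which ssh propagates and the minikube command surfaces as exit status 1. A sketch of reading that exit code from Go (standard os/exec usage, not the test's own helper):

// Run `systemctl is-active <unit>` and report its exit code; inactive units
// exit non-zero (typically 3), so the error branch is the expected path here.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "docker")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("active:", state)
	case errors.As(err, &exitErr):
		fmt.Printf("unit not active: %q (exit code %d)\n", state, exitErr.ExitCode())
	default:
		panic(err)
	}
}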

                                                
                                    
x
+
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (5.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdany-port2356503810/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765147554469153433" to /tmp/TestFunctionalparallelMountCmdany-port2356503810/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765147554469153433" to /tmp/TestFunctionalparallelMountCmdany-port2356503810/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765147554469153433" to /tmp/TestFunctionalparallelMountCmdany-port2356503810/001/test-1765147554469153433
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.302749ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:45:54.785875  375863 retry.go:31] will retry after 573.084349ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 22:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 22:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 22:45 test-1765147554469153433
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh cat /mount-9p/test-1765147554469153433
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-463278 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [710dc77c-ec68-4664-bed8-86fb0d37ba85] Pending
helpers_test.go:352: "busybox-mount" [710dc77c-ec68-4664-bed8-86fb0d37ba85] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [710dc77c-ec68-4664-bed8-86fb0d37ba85] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [710dc77c-ec68-4664-bed8-86fb0d37ba85] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004394208s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-463278 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdany-port2356503810/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.91s)
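
The flow above is: write marker files into a host temp dir, start `minikube mount host:/mount-9p` as a background daemon, poll `findmnt -T /mount-9p | grep 9p` until the 9p mount appears (the first probe raced the mount and was retried after ~573ms), then let the busybox pod read and remove files through the mount before unmounting. A hedged sketch of that kind of poll-with-backoff, standing in for the retry helper referenced as retry.go in the log:

// Poll a probe function with a growing delay until it succeeds or the budget
// runs out -- an illustrative stand-in for the test's retry helper.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry keeps calling probe until it returns nil or timeout elapses.
func retry(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting: %w", err)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	err := retry(func() error {
		// Succeeds only once the 9p mount is visible inside the guest.
		return exec.Command("out/minikube-linux-amd64", "-p", "functional-463278",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	}, 30*time.Second)
	fmt.Println("mount ready:", err == nil)
}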

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "346.033434ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "73.562491ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "366.78431ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.858916ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463278 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-463278
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-463278
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463278 image ls --format short --alsologtostderr:
I1207 22:52:07.945487  432505 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:07.945770  432505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:07.945780  432505 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:07.945787  432505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:07.946006  432505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 22:52:07.946531  432505 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:07.946664  432505 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:07.947129  432505 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:52:07.964830  432505 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:07.964881  432505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:52:07.981184  432505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:52:08.072962  432505 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463278 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:a5f569 │ 27.1MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-463278  │ sha256:9e81fd │ 991B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:88320b │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kicbase/echo-server               │ functional-463278  │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ localhost/my-image                          │ functional-463278  │ sha256:1942c2 │ 775kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:01e8ba │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:8aa150 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463278 image ls --format table --alsologtostderr:
I1207 22:52:10.369296  433014 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:10.369574  433014 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:10.369585  433014 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:10.369589  433014 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:10.369830  433014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 22:52:10.370436  433014 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:10.370555  433014 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:10.371036  433014 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:52:10.388240  433014 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:10.388290  433014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:52:10.405406  433014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:52:10.495975  433014 ssh_runner.go:195] Run: sudo crictl images --output json
E1207 22:53:16.814382  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463278 image ls --format json --alsologtostderr:
[{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:1942c284ca2fde915c169855727208779257b2935bc8051b020056b974196b86","repoDigests":[],"repoTags":["localhost/my-image:functional-463278"],"size":"774885"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-463278"],"size":"2372971"},{"id":"sha256:9e81fd33a07a95f92a3dd3c4eec8a5d86b5acc4278bcea9e4f58ced5f057307f","repoDigests":[],"repoTags":["docker.io/library/min
ikube-local-cache-test:functional-463278"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"27060130"},{"id":"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"22818657"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pau
se:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"25963482"},{"id":"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"17382272"},{"id":"sha256:cd073f4c5f6a
8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463278 image ls --format json --alsologtostderr:
I1207 22:52:10.153015  432959 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:10.153277  432959 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:10.153288  432959 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:10.153292  432959 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:10.153556  432959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 22:52:10.154258  432959 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:10.154362  432959 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:10.154816  432959 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:52:10.172458  432959 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:10.172500  432959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:52:10.189304  432959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:52:10.281224  432959 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
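
`image ls --format json` shells into the node, runs `sudo crictl images --output json` (visible in the stderr above), and prints an array whose entries carry `id`, `repoDigests`, `repoTags`, and `size`. A small decoding sketch using exactly those keys; the struct is illustrative, not minikube's own type:

// Decode the shape of the `image ls --format json` output shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	data := []byte(`[{"id":"sha256:abc","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]`)
	var images []imageEntry
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}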

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-463278 image ls --format yaml --alsologtostderr:
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "27060130"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-463278
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "25963482"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "22818657"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9e81fd33a07a95f92a3dd3c4eec8a5d86b5acc4278bcea9e4f58ced5f057307f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-463278
size: "991"
- id: sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "17382272"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463278 image ls --format yaml --alsologtostderr:
I1207 22:52:08.161682  432561 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:08.161923  432561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:08.161931  432561 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:08.161935  432561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:08.162121  432561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 22:52:08.162820  432561 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:08.162968  432561 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:08.163491  432561 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:52:08.180911  432561 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:08.180963  432561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:52:08.198537  432561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:52:08.288719  432561 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh pgrep buildkitd: exit status 1 (258.61348ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image build -t localhost/my-image:functional-463278 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 image build -t localhost/my-image:functional-463278 testdata/build --alsologtostderr: (1.255702894s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-463278 image build -t localhost/my-image:functional-463278 testdata/build --alsologtostderr:
I1207 22:52:08.639836  432723 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:08.639947  432723 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:08.639959  432723 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:08.639965  432723 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:08.640139  432723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 22:52:08.640639  432723 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:08.641233  432723 config.go:182] Loaded profile config "functional-463278": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1207 22:52:08.641770  432723 cli_runner.go:164] Run: docker container inspect functional-463278 --format={{.State.Status}}
I1207 22:52:08.659286  432723 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:08.659338  432723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-463278
I1207 22:52:08.675483  432723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-463278/id_rsa Username:docker}
I1207 22:52:08.769965  432723 build_images.go:162] Building image from path: /tmp/build.1556432883.tar
I1207 22:52:08.770053  432723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 22:52:08.777766  432723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1556432883.tar
I1207 22:52:08.781305  432723 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1556432883.tar: stat -c "%s %y" /var/lib/minikube/build/build.1556432883.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1556432883.tar': No such file or directory
I1207 22:52:08.781334  432723 ssh_runner.go:362] scp /tmp/build.1556432883.tar --> /var/lib/minikube/build/build.1556432883.tar (3072 bytes)
I1207 22:52:08.798019  432723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1556432883
I1207 22:52:08.805018  432723 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1556432883 -xf /var/lib/minikube/build/build.1556432883.tar
I1207 22:52:08.812497  432723 containerd.go:394] Building image: /var/lib/minikube/build/build.1556432883
I1207 22:52:08.812574  432723 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1556432883 --local dockerfile=/var/lib/minikube/build/build.1556432883 --output type=image,name=localhost/my-image:functional-463278
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.2s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4d36d385a092507f6ec1f2a2ad3f3f7f660132e6f090d168e410dde227abd08c done
#8 exporting config sha256:1942c284ca2fde915c169855727208779257b2935bc8051b020056b974196b86 done
#8 naming to localhost/my-image:functional-463278 done
#8 DONE 0.1s
I1207 22:52:09.813618  432723 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1556432883 --local dockerfile=/var/lib/minikube/build/build.1556432883 --output type=image,name=localhost/my-image:functional-463278: (1.001014479s)
I1207 22:52:09.813706  432723 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1556432883
I1207 22:52:09.821905  432723 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1556432883.tar
I1207 22:52:09.829071  432723 build_images.go:218] Built localhost/my-image:functional-463278 from /tmp/build.1556432883.tar
I1207 22:52:09.829102  432723 build_images.go:134] succeeded building to: functional-463278
I1207 22:52:09.829107  432723 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)
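
Because the runtime here is containerd, `image build` goes through BuildKit rather than a Docker daemon: the log shows the local build context being tarred, copied to /var/lib/minikube/build/, unpacked, and then built with `buildctl build --frontend dockerfile.v0 ...`. A sketch of that final step only, with the flags taken verbatim from the log; it assumes buildctl is on PATH wherever it runs, and the context path is just the one from this run:

// Invoke BuildKit the way the log above shows, via buildctl.  Both the
// buildctl binary and the context directory are illustrative stand-ins for
// what exists inside the minikube node.
package main

import (
	"os"
	"os/exec"
)

func main() {
	dir := "/var/lib/minikube/build/build.1556432883"
	cmd := exec.Command("buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-463278",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}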

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.019754617s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-463278
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image load --daemon kicbase/echo-server:functional-463278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image load --daemon kicbase/echo-server:functional-463278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-463278
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image load --daemon kicbase/echo-server:functional-463278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdspecific-port2260166731/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.578484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:46:00.664747  375863 retry.go:31] will retry after 530.920115ms: exit status 1
E1207 22:46:00.681673  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdspecific-port2260166731/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh "sudo umount -f /mount-9p": exit status 1 (274.280861ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-463278 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdspecific-port2260166731/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)
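
The specific-port flow above can be retried manually; a rough sketch, where /tmp/mount-src is an arbitrary host directory (hypothetical path) and 46464 is the fixed 9p port used by the test:

	# run the mount in the background; it blocks while the 9p server is up
	minikube -p functional-463278 mount /tmp/mount-src:/mount-9p --port 46464 &
	# verify the guest sees a 9p mount at /mount-9p
	minikube -p functional-463278 ssh "findmnt -T /mount-9p | grep 9p"
	# tear it down from inside the guest (exit status 32 if it is already unmounted, as seen above)
	minikube -p functional-463278 ssh "sudo umount -f /mount-9p"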

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image save kicbase/echo-server:functional-463278 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image rm kicbase/echo-server:functional-463278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
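
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile form a tar-based round trip. A hand-run equivalent might look like this (sketch; ./echo-server-save.tar is an illustrative path, whereas the report writes the tarball under the Jenkins workspace):

	minikube -p functional-463278 image save kicbase/echo-server:functional-463278 ./echo-server-save.tar
	minikube -p functional-463278 image rm kicbase/echo-server:functional-463278
	minikube -p functional-463278 image load ./echo-server-save.tar
	minikube -p functional-463278 image ls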

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T" /mount1: exit status 1 (369.966523ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:46:02.608557  375863 retry.go:31] will retry after 492.785352ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-463278 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-463278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup399378081/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
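
The cleanup path verified here can also be driven manually; a sketch, assuming one or more background mounts were started as in the previous test:

	# ask minikube to kill any mount processes associated with the profile
	minikube mount -p functional-463278 --kill=true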

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-463278
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 image save --daemon kicbase/echo-server:functional-463278 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-463278
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-463278 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-463278 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-463278 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-463278 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 427436: os: process already finished
helpers_test.go:519: unable to terminate pid 427135: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-463278 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-463278 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 service list: (1.69142648s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-463278 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-463278 service list -o json: (1.690663894s)
functional_test.go:1504: Took "1.690756434s" to run "out/minikube-linux-amd64 -p functional-463278 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-463278
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-463278
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-463278
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-372312/.minikube/files/etc/test/nested/copy/375863/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (36.88s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-372098 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-372098 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (36.877949557s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (36.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.59s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1207 23:02:38.998291  375863 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-372098 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-372098 --alsologtostderr -v=8: (5.586740115s)
functional_test.go:678: soft start took 5.58711186s for "functional-372098" cluster.
I1207 23:02:44.585383  375863 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (5.59s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-372098 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach853971823/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cache add minikube-local-cache-test:functional-372098
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cache delete minikube-local-cache-test:functional-372098
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-372098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (271.250532ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.47s)
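
The cache_reload sequence can be replayed by hand; a sketch against the functional-372098 profile, assuming an installed minikube CLI:

	minikube -p functional-372098 cache add registry.k8s.io/pause:latest
	# remove the image from the node's containerd store
	minikube -p functional-372098 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# push everything in the local cache back onto the node
	minikube -p functional-372098 cache reload
	# the image should be present again
	minikube -p functional-372098 ssh sudo crictl inspecti registry.k8s.io/pause:latest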

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 kubectl -- --context functional-372098 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-372098 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (39.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-372098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1207 23:03:16.815423  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-372098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.672694281s)
functional_test.go:776: restart took 39.672811193s for "functional-372098" cluster.
I1207 23:03:30.195468  375863 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (39.67s)
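
The restart exercised here passes an extra apiserver flag through to the control plane; roughly (sketch, assuming an installed minikube CLI):

	minikube start -p functional-372098 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all
	# control-plane pods should come back Ready, as the ComponentHealth check below confirms
	kubectl --context functional-372098 get po -l tier=control-plane -n kube-system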

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-372098 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 logs: (1.125493921s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3692976367/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3692976367/001/logs.txt: (1.150151236s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-372098 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-372098
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-372098: exit status 115 (327.6511ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32603 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-372098 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.21s)
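
What InvalidService checks can be reproduced with any Service whose selector matches no running pod; a sketch using the same testdata manifest from the minikube repository:

	kubectl --context functional-372098 apply -f testdata/invalidsvc.yaml
	# expected to fail with exit status 115 (SVC_UNREACHABLE: no running pod for the service)
	minikube service invalid-svc -p functional-372098
	kubectl --context functional-372098 delete -f testdata/invalidsvc.yaml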

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 config get cpus: exit status 14 (98.182498ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 config get cpus: exit status 14 (91.994196ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
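
The ConfigCmd round trip maps onto the following commands (sketch):

	minikube -p functional-372098 config set cpus 2
	minikube -p functional-372098 config get cpus      # prints 2
	minikube -p functional-372098 config unset cpus
	# exit status 14: the key is no longer in the config
	minikube -p functional-372098 config get cpus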

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.37s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (163.57343ms)

                                                
                                                
-- stdout --
	* [functional-372098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:10:02.302637  457681 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:10:02.302753  457681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.302762  457681 out.go:374] Setting ErrFile to fd 2...
	I1207 23:10:02.302767  457681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.302939  457681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:10:02.303364  457681 out.go:368] Setting JSON to false
	I1207 23:10:02.304329  457681 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6751,"bootTime":1765142251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:10:02.304383  457681 start.go:143] virtualization: kvm guest
	I1207 23:10:02.306569  457681 out.go:179] * [functional-372098] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:10:02.307904  457681 notify.go:221] Checking for updates...
	I1207 23:10:02.307918  457681 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:10:02.309230  457681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:10:02.310743  457681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:10:02.311984  457681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:10:02.316171  457681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:10:02.317326  457681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:10:02.318924  457681 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:10:02.319630  457681 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:10:02.344206  457681 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:10:02.344313  457681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:10:02.398114  457681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 23:10:02.388320661 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:10:02.398229  457681 docker.go:319] overlay module found
	I1207 23:10:02.399946  457681 out.go:179] * Using the docker driver based on existing profile
	I1207 23:10:02.401131  457681 start.go:309] selected driver: docker
	I1207 23:10:02.401144  457681 start.go:927] validating driver "docker" against &{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:10:02.401271  457681 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:10:02.402902  457681 out.go:203] 
	W1207 23:10:02.404056  457681 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 23:10:02.405102  457681 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-372098 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.37s)
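
The DryRun check relies on minikube's memory validation; a sketch of the two invocations, assuming an installed minikube CLI:

	# rejected before any work is done: 250MiB is below the 1800MB usable minimum (exit status 23)
	minikube start -p functional-372098 --dry-run --memory 250MB --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
	# a dry run without the undersized memory request validates the existing profile and exits 0
	minikube start -p functional-372098 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0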

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-372098 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (157.588902ms)

                                                
                                                
-- stdout --
	* [functional-372098] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:10:02.675202  457901 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:10:02.675303  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675311  457901 out.go:374] Setting ErrFile to fd 2...
	I1207 23:10:02.675315  457901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:10:02.675574  457901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:10:02.675985  457901 out.go:368] Setting JSON to false
	I1207 23:10:02.676932  457901 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6752,"bootTime":1765142251,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:10:02.676985  457901 start.go:143] virtualization: kvm guest
	I1207 23:10:02.678751  457901 out.go:179] * [functional-372098] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1207 23:10:02.679921  457901 notify.go:221] Checking for updates...
	I1207 23:10:02.679967  457901 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:10:02.681267  457901 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:10:02.682437  457901 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:10:02.683434  457901 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:10:02.684504  457901 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:10:02.685516  457901 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:10:02.687010  457901 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:10:02.687493  457901 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:10:02.711459  457901 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:10:02.711573  457901 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:10:02.764536  457901 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 23:10:02.755142825 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:10:02.764648  457901 docker.go:319] overlay module found
	I1207 23:10:02.766932  457901 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1207 23:10:02.767937  457901 start.go:309] selected driver: docker
	I1207 23:10:02.767949  457901 start.go:927] validating driver "docker" against &{Name:functional-372098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-372098 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:10:02.768026  457901 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:10:02.769497  457901 out.go:203] 
	W1207 23:10:02.770495  457901 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 23:10:02.771545  457901 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.91s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.91s)
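
The StatusCmd variants shown above correspond to the following invocations (sketch; the Go-template field names come from the recorded command):

	minikube -p functional-372098 status
	minikube -p functional-372098 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-372098 status -o json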

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.61s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh -n functional-372098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cp functional-372098:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1318519715/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh -n functional-372098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh -n functional-372098 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)
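
The copy/verify sequence above can be reproduced by hand. A minimal sketch, assuming a local file named hello.txt (the file name is illustrative; the cp/ssh invocations mirror the ones in the log):

  # copy a host file into the node, then read it back over ssh to confirm the contents
  out/minikube-linux-amd64 -p functional-372098 cp hello.txt /home/docker/hello.txt
  out/minikube-linux-amd64 -p functional-372098 ssh -n functional-372098 "sudo cat /home/docker/hello.txt"

  # the reverse direction uses the same syntax with a <profile>:<path> source
  out/minikube-linux-amd64 -p functional-372098 cp functional-372098:/home/docker/hello.txt ./hello-copy.txt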

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/375863/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/test/nested/copy/375863/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)
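
The sync target above comes from minikube's file sync mechanism: anything staged under $MINIKUBE_HOME/files is copied into the node at the same path when the cluster starts. A minimal sketch, assuming the default ~/.minikube home; the path mirrors the test, the content is illustrative:

  # stage a file under the sync directory on the host
  mkdir -p ~/.minikube/files/etc/test/nested/copy/375863
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/375863/hosts

  # after the node is (re)started, the same path exists inside the VM
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/test/nested/copy/375863/hosts"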

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/375863.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/ssl/certs/375863.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/375863.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /usr/share/ca-certificates/375863.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3758632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/ssl/certs/3758632.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3758632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /usr/share/ca-certificates/3758632.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.87s)
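
The hashed names checked last (51391683.0, 3ec20f2e.0) follow the OpenSSL convention of exposing CA certificates in /etc/ssl/certs under their subject hash. A hedged sketch of computing that hash for a certificate and reading the synced copies back out of the node; the local cert path is illustrative, the ssh commands mirror the log:

  # compute the subject hash OpenSSL uses for the <hash>.0 lookup name
  openssl x509 -noout -subject_hash -in ~/.minikube/certs/375863.pem

  # read the synced copies inside the node
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/ssl/certs/375863.pem"
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo cat /etc/ssl/certs/51391683.0"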

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-372098 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)
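
The go-template above walks the label map of the first node; an equivalent jsonpath query is a quick way to dump the same labels when checking by hand:

  # print the label map of the first node
  kubectl --context functional-372098 get nodes -o jsonpath='{.items[0].metadata.labels}'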

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh "sudo systemctl is-active docker": exit status 1 (327.44096ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh "sudo systemctl is-active crio": exit status 1 (329.05986ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)
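
The exit status 3 seen above is the expected outcome: systemctl is-active prints the unit state and returns 0 only for an active unit, so docker and crio reporting "inactive" with a non-zero exit is exactly what a containerd-only node should show. A small sketch of the same check:

  # non-zero exit means the unit is not running; the || branch documents that this is expected here
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo systemctl is-active docker" \
    || echo "docker is not active, as expected on a containerd node"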

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.44s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.46s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-372098 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-372098
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-372098
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-372098 image ls --format short --alsologtostderr:
I1207 23:10:06.935224  459494 out.go:360] Setting OutFile to fd 1 ...
I1207 23:10:06.935488  459494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:06.935497  459494 out.go:374] Setting ErrFile to fd 2...
I1207 23:10:06.935504  459494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:06.935731  459494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 23:10:06.936270  459494 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:06.936393  459494 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:06.936853  459494 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:06.954390  459494 ssh_runner.go:195] Run: systemctl --version
I1207 23:10:06.954440  459494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:06.970796  459494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:07.061011  459494 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
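
The stderr trace shows where the listing comes from: running sudo crictl images --output json inside the node. A sketch of querying crictl directly, assuming jq is available on the host and that crictl keeps its usual top-level "images" array in the JSON output:

  # list the repo tags containerd knows about, bypassing the minikube image subcommand
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo crictl images --output json" \
    | jq -r '.images[].repoTags[]'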

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-372098 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/minikube-local-cache-test │ functional-372098  │ sha256:9e81fd │ 991B   │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:aa9d02 │ 27.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:8a4ded │ 25.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-372098  │ sha256:9056ab │ 2.37MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ localhost/my-image                          │ functional-372098  │ sha256:388c80 │ 775kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:45f3cc │ 23.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:7bb621 │ 17.2MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-372098 image ls --format table --alsologtostderr:
I1207 23:10:09.382246  460013 out.go:360] Setting OutFile to fd 1 ...
I1207 23:10:09.382476  460013 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:09.382484  460013 out.go:374] Setting ErrFile to fd 2...
I1207 23:10:09.382488  460013 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:09.382661  460013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 23:10:09.383236  460013 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:09.383328  460013 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:09.384451  460013 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:09.401939  460013 ssh_runner.go:195] Run: systemctl --version
I1207 23:10:09.401980  460013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:09.417476  460013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:09.508848  460013 ssh_runner.go:195] Run: sudo crictl images --output json
E1207 23:10:54.192171  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:11:21.893083  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:13:16.814980  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-372098 image ls --format json --alsologtostderr:
[{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:388c80b9975ebbb4cf3f31c66e97157d1acc86ef1cccaef227f41a7e1c9f3ac0","repoDigests":[],"repoTags":["localhost/my-image:functional-372098"],"size":"774886"},{"id":"sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size"
:"23121143"},{"id":"sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"17228488"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-372098"],"size":"2372971"},{"id":"sha256:9e81fd33a07a95f92a3dd3c4eec8a5d86b5acc4278bcea9e4f58ced5f057307f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-372098"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags
":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2c
fe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"27671920"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"25786942"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-372098 image ls --format json --alsologtostderr:
I1207 23:10:09.162827  459958 out.go:360] Setting OutFile to fd 1 ...
I1207 23:10:09.162932  459958 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:09.162940  459958 out.go:374] Setting ErrFile to fd 2...
I1207 23:10:09.162947  459958 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:09.163162  459958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 23:10:09.163686  459958 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:09.163800  459958 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:09.164182  459958 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:09.181800  459958 ssh_runner.go:195] Run: systemctl --version
I1207 23:10:09.181851  459958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:09.198569  459958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:09.288788  459958 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-372098 image ls --format yaml --alsologtostderr:
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:9e81fd33a07a95f92a3dd3c4eec8a5d86b5acc4278bcea9e4f58ced5f057307f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-372098
size: "991"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "17228488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-372098
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "27671920"
- id: sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "23121143"
- id: sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "25786942"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-372098 image ls --format yaml --alsologtostderr:
I1207 23:10:07.157427  459552 out.go:360] Setting OutFile to fd 1 ...
I1207 23:10:07.157668  459552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:07.157676  459552 out.go:374] Setting ErrFile to fd 2...
I1207 23:10:07.157680  459552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:07.157860  459552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 23:10:07.158491  459552 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:07.158600  459552 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:07.159046  459552 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:07.176739  459552 ssh_runner.go:195] Run: systemctl --version
I1207 23:10:07.176782  459552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:07.194136  459552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:07.284611  459552 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (1.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh pgrep buildkitd: exit status 1 (258.540377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image build -t localhost/my-image:functional-372098 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 image build -t localhost/my-image:functional-372098 testdata/build --alsologtostderr: (1.254791679s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-372098 image build -t localhost/my-image:functional-372098 testdata/build --alsologtostderr:
I1207 23:10:07.637919  459727 out.go:360] Setting OutFile to fd 1 ...
I1207 23:10:07.638045  459727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:07.638055  459727 out.go:374] Setting ErrFile to fd 2...
I1207 23:10:07.638062  459727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:10:07.638262  459727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
I1207 23:10:07.638849  459727 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:07.639436  459727 config.go:182] Loaded profile config "functional-372098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1207 23:10:07.639898  459727 cli_runner.go:164] Run: docker container inspect functional-372098 --format={{.State.Status}}
I1207 23:10:07.657382  459727 ssh_runner.go:195] Run: systemctl --version
I1207 23:10:07.657436  459727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-372098
I1207 23:10:07.673212  459727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/functional-372098/id_rsa Username:docker}
I1207 23:10:07.763931  459727 build_images.go:162] Building image from path: /tmp/build.130858100.tar
I1207 23:10:07.763986  459727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 23:10:07.771711  459727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.130858100.tar
I1207 23:10:07.775281  459727 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.130858100.tar: stat -c "%s %y" /var/lib/minikube/build/build.130858100.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.130858100.tar': No such file or directory
I1207 23:10:07.775304  459727 ssh_runner.go:362] scp /tmp/build.130858100.tar --> /var/lib/minikube/build/build.130858100.tar (3072 bytes)
I1207 23:10:07.793017  459727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.130858100
I1207 23:10:07.800238  459727 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.130858100 -xf /var/lib/minikube/build/build.130858100.tar
I1207 23:10:07.807852  459727 containerd.go:394] Building image: /var/lib/minikube/build/build.130858100
I1207 23:10:07.807924  459727 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.130858100 --local dockerfile=/var/lib/minikube/build/build.130858100 --output type=image,name=localhost/my-image:functional-372098
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.2s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:5b9d719464dab0edbab7d45eb752570213842cb28a0eaebeab53eee4b59e30d1 done
#8 exporting config sha256:388c80b9975ebbb4cf3f31c66e97157d1acc86ef1cccaef227f41a7e1c9f3ac0 done
#8 naming to localhost/my-image:functional-372098
#8 naming to localhost/my-image:functional-372098 done
#8 DONE 0.1s
I1207 23:10:08.806986  459727 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.130858100
I1207 23:10:08.815192  459727 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.130858100.tar
I1207 23:10:08.822672  459727 build_images.go:218] Built localhost/my-image:functional-372098 from /tmp/build.130858100.tar
I1207 23:10:08.822711  459727 build_images.go:134] succeeded building to: functional-372098
I1207 23:10:08.822717  459727 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (1.78s)
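
The build itself is delegated to BuildKit inside the node; the buildctl call is visible in the trace above. A condensed restatement of that call with comments (the build directory is the temporary one created for this run and differs on every invocation):

  # BuildKit dockerfile frontend: context and Dockerfile both come from the unpacked tarball,
  # and the result is committed straight into containerd's image store under the requested name
  sudo buildctl build \
    --frontend dockerfile.v0 \
    --local context=/var/lib/minikube/build/build.130858100 \
    --local dockerfile=/var/lib/minikube/build/build.130858100 \
    --output type=image,name=localhost/my-image:functional-372098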

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-372098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr: (1.030311291s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.28s)
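
image load --daemon takes an image from the host's Docker daemon and imports it into the node's containerd store. A minimal sketch of the roundtrip exercised here, using the same image names as the Setup step:

  # tag an image in the host docker daemon, load it into the node, then confirm it is listed
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-372098
  out/minikube-linux-amd64 -p functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr
  out/minikube-linux-amd64 -p functional-372098 image ls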

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 450224: os: process already finished
helpers_test.go:519: unable to terminate pid 450047: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
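
minikube tunnel is a long-running foreground process that programs routes to the cluster and assigns external IPs to LoadBalancer services, which is why the tests launch it as a daemon and stop it separately. A hedged sketch of driving it by hand; the nginx-svc service name is illustrative, not taken from this log:

  # start the tunnel in the background and remember its pid for cleanup
  out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr &
  TUNNEL_PID=$!

  # once the tunnel is up, LoadBalancer services receive an external IP reachable from the host
  kubectl --context functional-372098 get svc nginx-svc -o wide

  kill "$TUNNEL_PID"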

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-372098
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image load --daemon kicbase/echo-server:functional-372098 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image save kicbase/echo-server:functional-372098 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image rm kicbase/echo-server:functional-372098 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.61s)
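
Together with ImageSaveToFile above, this is a plain tarball roundtrip: export the image from the node to a tar on the host, then import it again. A minimal sketch using the same tar path as the test:

  # export the image from the node's containerd store to a tarball on the host
  out/minikube-linux-amd64 -p functional-372098 image save kicbase/echo-server:functional-372098 \
    /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr

  # re-import the tarball and confirm the tag shows up again
  out/minikube-linux-amd64 -p functional-372098 image load \
    /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-372098 image ls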

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-372098
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 image save --daemon kicbase/echo-server:functional-372098 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-372098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-372098 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "321.366009ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.56674ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "325.44258ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.030035ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)
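
profile list -o json is the machine-readable variant of the command timed above, and --light skips the slower per-profile status probe. A hedged sketch of pulling profile names out of it with jq, assuming the output keeps its usual valid/invalid top-level split:

  # print the names of all valid profiles
  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
  out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'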

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3701926629/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765148993199053283" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3701926629/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765148993199053283" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3701926629/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765148993199053283" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3701926629/001/test-1765148993199053283
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.355213ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:09:53.467723  375863 retry.go:31] will retry after 358.014777ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 23:09 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 23:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 23:09 test-1765148993199053283
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh cat /mount-9p/test-1765148993199053283
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-372098 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8f6758b3-49ad-4564-b784-73ddc4f61381] Pending
helpers_test.go:352: "busybox-mount" [8f6758b3-49ad-4564-b784-73ddc4f61381] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8f6758b3-49ad-4564-b784-73ddc4f61381] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8f6758b3-49ad-4564-b784-73ddc4f61381] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003296335s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-372098 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3701926629/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.51s)
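Note: the commands below are a hand-written sketch of the mount flow exercised above, not part of the recorded output. They assume the functional-372098 profile is still running; /tmp/demo is a placeholder host directory.
  out/minikube-linux-amd64 mount -p functional-372098 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry; the 9p mount attaches asynchronously, as seen above
  out/minikube-linux-amd64 -p functional-372098 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo umount -f /mount-9p"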

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo181578777/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.427408ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:09:58.981975  375863 retry.go:31] will retry after 345.338796ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo181578777/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh "sudo umount -f /mount-9p": exit status 1 (258.13787ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-372098 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo181578777/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.62s)
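Note: a minimal sketch of the fixed-port variant above, under the same assumptions (running functional-372098 profile, placeholder host path). 46464 is the port recorded in the test invocation.
  out/minikube-linux-amd64 mount -p functional-372098 /tmp/demo:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-372098 ssh "sudo umount -f /mount-9p"   # exit 32 ("not mounted") once the mount process is already gone, as seen above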

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T" /mount1: exit status 1 (323.311833ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:10:00.658515  375863 retry.go:31] will retry after 723.82001ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-372098 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-372098 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2517702564/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.91s)
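Note: sketch of the cleanup path exercised above, with the same placeholder host path. --kill=true terminates every mount process registered for the profile, which is why the stop helpers afterwards report "unable to find parent, assuming dead".
  out/minikube-linux-amd64 mount -p functional-372098 /tmp/demo:/mount1 &
  out/minikube-linux-amd64 mount -p functional-372098 /tmp/demo:/mount2 &
  out/minikube-linux-amd64 mount -p functional-372098 /tmp/demo:/mount3 &
  out/minikube-linux-amd64 -p functional-372098 ssh "findmnt -T" /mount1
  out/minikube-linux-amd64 mount -p functional-372098 --kill=true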

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 service list: (1.716946003s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.72s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-372098 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-372098 service list -o json: (1.693694307s)
functional_test.go:1504: Took "1.693797509s" to run "out/minikube-linux-amd64 -p functional-372098 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.69s)
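Note: both service listings (table and JSON) query the live cluster, which is where the roughly 1.7s per call above comes from; the same commands can be run by hand against the profile.
  out/minikube-linux-amd64 -p functional-372098 service list
  out/minikube-linux-amd64 -p functional-372098 service list -o json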

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-372098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-372098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-372098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (139.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1207 23:20:54.191924  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m19.040704433s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (139.74s)
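Note: the cluster under test can be reproduced with the invocation recorded above (the profile name is arbitrary); --ha provisions multiple control-plane nodes and --wait true blocks until the core components report healthy, which accounts for the ~2m19s start time.
  out/minikube-linux-amd64 -p ha-683468 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5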

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 kubectl -- rollout status deployment/busybox: (1.809222925s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-2zlfs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-mq5ns -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-vrcbc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-2zlfs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-mq5ns -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-vrcbc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-2zlfs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-mq5ns -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-vrcbc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.01s)
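Note: a condensed sketch of the DNS check performed above, assuming the busybox deployment from testdata/ha/ha-pod-dns-test.yaml has rolled out; <busybox-pod> stands for a pod name read from the live cluster rather than hard-coded.
  out/minikube-linux-amd64 -p ha-683468 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 -p ha-683468 kubectl -- rollout status deployment/busybox
  out/minikube-linux-amd64 -p ha-683468 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local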

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-2zlfs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-2zlfs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-mq5ns -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-mq5ns -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-vrcbc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec busybox-7b57f96db7-vrcbc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
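Note: the host-reachability check above resolves host.minikube.internal inside each pod and pings the resolved address; a single-pod version, with <busybox-pod> as a placeholder name, looks like:
  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 -p ha-683468 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"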

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (25.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 node add --alsologtostderr -v 5: (25.104325092s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.98s)
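Note: without --control-plane, node add joins a worker; it appears as ha-683468-m04 in later status output with only host and kubelet entries (no apiserver or kubeconfig lines). The recorded pair of commands:
  out/minikube-linux-amd64 -p ha-683468 node add --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5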

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-683468 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp testdata/cp-test.txt ha-683468:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile778272642/001/cp-test_ha-683468.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test.txt"
E1207 23:22:17.254945  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468:/home/docker/cp-test.txt ha-683468-m02:/home/docker/cp-test_ha-683468_ha-683468-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test_ha-683468_ha-683468-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468:/home/docker/cp-test.txt ha-683468-m03:/home/docker/cp-test_ha-683468_ha-683468-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test_ha-683468_ha-683468-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468:/home/docker/cp-test.txt ha-683468-m04:/home/docker/cp-test_ha-683468_ha-683468-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test_ha-683468_ha-683468-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp testdata/cp-test.txt ha-683468-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile778272642/001/cp-test_ha-683468-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m02:/home/docker/cp-test.txt ha-683468:/home/docker/cp-test_ha-683468-m02_ha-683468.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test_ha-683468-m02_ha-683468.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m02:/home/docker/cp-test.txt ha-683468-m03:/home/docker/cp-test_ha-683468-m02_ha-683468-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test_ha-683468-m02_ha-683468-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m02:/home/docker/cp-test.txt ha-683468-m04:/home/docker/cp-test_ha-683468-m02_ha-683468-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test_ha-683468-m02_ha-683468-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp testdata/cp-test.txt ha-683468-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile778272642/001/cp-test_ha-683468-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m03:/home/docker/cp-test.txt ha-683468:/home/docker/cp-test_ha-683468-m03_ha-683468.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test_ha-683468-m03_ha-683468.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m03:/home/docker/cp-test.txt ha-683468-m02:/home/docker/cp-test_ha-683468-m03_ha-683468-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test_ha-683468-m03_ha-683468-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m03:/home/docker/cp-test.txt ha-683468-m04:/home/docker/cp-test_ha-683468-m03_ha-683468-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test_ha-683468-m03_ha-683468-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp testdata/cp-test.txt ha-683468-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile778272642/001/cp-test_ha-683468-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m04:/home/docker/cp-test.txt ha-683468:/home/docker/cp-test_ha-683468-m04_ha-683468.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468 "sudo cat /home/docker/cp-test_ha-683468-m04_ha-683468.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m04:/home/docker/cp-test.txt ha-683468-m02:/home/docker/cp-test_ha-683468-m04_ha-683468-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test_ha-683468-m04_ha-683468-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 cp ha-683468-m04:/home/docker/cp-test.txt ha-683468-m03:/home/docker/cp-test_ha-683468-m04_ha-683468-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m03 "sudo cat /home/docker/cp-test_ha-683468-m04_ha-683468-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.04s)
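Note: the matrix above copies testdata/cp-test.txt host-to-node, node-to-host, and node-to-node, verifying each copy with an ssh cat; one round of the pattern, using paths exactly as recorded:
  out/minikube-linux-amd64 -p ha-683468 cp testdata/cp-test.txt ha-683468:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-683468 cp ha-683468:/home/docker/cp-test.txt ha-683468-m02:/home/docker/cp-test_ha-683468_ha-683468-m02.txt
  out/minikube-linux-amd64 -p ha-683468 ssh -n ha-683468-m02 "sudo cat /home/docker/cp-test_ha-683468_ha-683468-m02.txt"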

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 node stop m02 --alsologtostderr -v 5: (12.013144421s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5: exit status 7 (665.552234ms)

                                                
                                                
-- stdout --
	ha-683468
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683468-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683468-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683468-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:22:44.540995  486290 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:22:44.541232  486290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:22:44.541240  486290 out.go:374] Setting ErrFile to fd 2...
	I1207 23:22:44.541244  486290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:22:44.541463  486290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:22:44.541647  486290 out.go:368] Setting JSON to false
	I1207 23:22:44.541671  486290 mustload.go:66] Loading cluster: ha-683468
	I1207 23:22:44.541798  486290 notify.go:221] Checking for updates...
	I1207 23:22:44.542042  486290 config.go:182] Loaded profile config "ha-683468": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:22:44.542057  486290 status.go:174] checking status of ha-683468 ...
	I1207 23:22:44.542491  486290 cli_runner.go:164] Run: docker container inspect ha-683468 --format={{.State.Status}}
	I1207 23:22:44.562060  486290 status.go:371] ha-683468 host status = "Running" (err=<nil>)
	I1207 23:22:44.562089  486290 host.go:66] Checking if "ha-683468" exists ...
	I1207 23:22:44.562324  486290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-683468
	I1207 23:22:44.580729  486290 host.go:66] Checking if "ha-683468" exists ...
	I1207 23:22:44.581016  486290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:22:44.581054  486290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-683468
	I1207 23:22:44.597890  486290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/ha-683468/id_rsa Username:docker}
	I1207 23:22:44.687563  486290 ssh_runner.go:195] Run: systemctl --version
	I1207 23:22:44.693512  486290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:22:44.705185  486290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:22:44.759209  486290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:22:44.749400114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:22:44.759850  486290 kubeconfig.go:125] found "ha-683468" server: "https://192.168.49.254:8443"
	I1207 23:22:44.759889  486290 api_server.go:166] Checking apiserver status ...
	I1207 23:22:44.759934  486290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:22:44.772083  486290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1207 23:22:44.780058  486290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:22:44.780116  486290 ssh_runner.go:195] Run: ls
	I1207 23:22:44.783773  486290 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1207 23:22:44.787802  486290 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1207 23:22:44.787824  486290 status.go:463] ha-683468 apiserver status = Running (err=<nil>)
	I1207 23:22:44.787835  486290 status.go:176] ha-683468 status: &{Name:ha-683468 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:22:44.787852  486290 status.go:174] checking status of ha-683468-m02 ...
	I1207 23:22:44.788083  486290 cli_runner.go:164] Run: docker container inspect ha-683468-m02 --format={{.State.Status}}
	I1207 23:22:44.806772  486290 status.go:371] ha-683468-m02 host status = "Stopped" (err=<nil>)
	I1207 23:22:44.806790  486290 status.go:384] host is not running, skipping remaining checks
	I1207 23:22:44.806796  486290 status.go:176] ha-683468-m02 status: &{Name:ha-683468-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:22:44.806814  486290 status.go:174] checking status of ha-683468-m03 ...
	I1207 23:22:44.807047  486290 cli_runner.go:164] Run: docker container inspect ha-683468-m03 --format={{.State.Status}}
	I1207 23:22:44.824774  486290 status.go:371] ha-683468-m03 host status = "Running" (err=<nil>)
	I1207 23:22:44.824792  486290 host.go:66] Checking if "ha-683468-m03" exists ...
	I1207 23:22:44.825024  486290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-683468-m03
	I1207 23:22:44.842415  486290 host.go:66] Checking if "ha-683468-m03" exists ...
	I1207 23:22:44.842672  486290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:22:44.842731  486290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-683468-m03
	I1207 23:22:44.860071  486290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/ha-683468-m03/id_rsa Username:docker}
	I1207 23:22:44.949581  486290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:22:44.961729  486290 kubeconfig.go:125] found "ha-683468" server: "https://192.168.49.254:8443"
	I1207 23:22:44.961755  486290 api_server.go:166] Checking apiserver status ...
	I1207 23:22:44.961785  486290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:22:44.972291  486290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1307/cgroup
	W1207 23:22:44.980010  486290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1307/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:22:44.980069  486290 ssh_runner.go:195] Run: ls
	I1207 23:22:44.984080  486290 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1207 23:22:44.988122  486290 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1207 23:22:44.988144  486290 status.go:463] ha-683468-m03 apiserver status = Running (err=<nil>)
	I1207 23:22:44.988153  486290 status.go:176] ha-683468-m03 status: &{Name:ha-683468-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:22:44.988184  486290 status.go:174] checking status of ha-683468-m04 ...
	I1207 23:22:44.988478  486290 cli_runner.go:164] Run: docker container inspect ha-683468-m04 --format={{.State.Status}}
	I1207 23:22:45.006077  486290 status.go:371] ha-683468-m04 host status = "Running" (err=<nil>)
	I1207 23:22:45.006095  486290 host.go:66] Checking if "ha-683468-m04" exists ...
	I1207 23:22:45.006308  486290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-683468-m04
	I1207 23:22:45.023756  486290 host.go:66] Checking if "ha-683468-m04" exists ...
	I1207 23:22:45.023988  486290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:22:45.024030  486290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-683468-m04
	I1207 23:22:45.040830  486290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/ha-683468-m04/id_rsa Username:docker}
	I1207 23:22:45.131729  486290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:22:45.144223  486290 status.go:176] ha-683468-m04 status: &{Name:ha-683468-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.68s)
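Note: the exit status 7 from "minikube status" above is expected here; it reflects a node in a non-running state (ha-683468-m02 reports Stopped) rather than a command failure, which is why the test still passes. The stop/inspect pair, as recorded:
  out/minikube-linux-amd64 -p ha-683468 node stop m02 --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5   # non-zero (7) while m02 is stopped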

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (8.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 node start m02 --alsologtostderr -v 5: (7.576716833s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (89.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 stop --alsologtostderr -v 5
E1207 23:23:16.814956  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 stop --alsologtostderr -v 5: (37.133704847s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 start --wait true --alsologtostderr -v 5
E1207 23:23:38.218498  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.224935  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.236277  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.257635  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.299041  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.381369  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.542965  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:38.865270  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:39.507822  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:40.789182  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:43.350680  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:48.474165  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:23:58.715592  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:24:19.196896  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 start --wait true --alsologtostderr -v 5: (52.705018337s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (89.97s)
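Note: this check stops the whole profile, restarts it with --wait true, and compares "node list" before and after to confirm no node was dropped. The repeated cert_rotation errors above refer to a client certificate for the earlier functional-372098 profile that no longer exists on disk; they are unrelated to the ha-683468 cluster and the restart still completes. The recorded sequence:
  out/minikube-linux-amd64 -p ha-683468 node list --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 stop --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 start --wait true --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 node list --alsologtostderr -v 5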

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 node delete m03 --alsologtostderr -v 5: (8.418901338s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.20s)
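Note: after deleting m03, the status and kubectl checks above confirm only the remaining nodes report Ready; the same check by hand, with commands as recorded:
  out/minikube-linux-amd64 -p ha-683468 node delete m03 --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
  kubectl get nodes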

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 stop --alsologtostderr -v 5
E1207 23:25:00.158282  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 stop --alsologtostderr -v 5: (35.890578436s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5: exit status 7 (114.593933ms)

                                                
                                                
-- stdout --
	ha-683468
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683468-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683468-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:25:10.952299  502429 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:25:10.952386  502429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:25:10.952395  502429 out.go:374] Setting ErrFile to fd 2...
	I1207 23:25:10.952399  502429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:25:10.952619  502429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:25:10.952799  502429 out.go:368] Setting JSON to false
	I1207 23:25:10.952823  502429 mustload.go:66] Loading cluster: ha-683468
	I1207 23:25:10.952950  502429 notify.go:221] Checking for updates...
	I1207 23:25:10.953773  502429 config.go:182] Loaded profile config "ha-683468": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:25:10.953809  502429 status.go:174] checking status of ha-683468 ...
	I1207 23:25:10.954735  502429 cli_runner.go:164] Run: docker container inspect ha-683468 --format={{.State.Status}}
	I1207 23:25:10.971839  502429 status.go:371] ha-683468 host status = "Stopped" (err=<nil>)
	I1207 23:25:10.971860  502429 status.go:384] host is not running, skipping remaining checks
	I1207 23:25:10.971871  502429 status.go:176] ha-683468 status: &{Name:ha-683468 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:25:10.972057  502429 status.go:174] checking status of ha-683468-m02 ...
	I1207 23:25:10.972357  502429 cli_runner.go:164] Run: docker container inspect ha-683468-m02 --format={{.State.Status}}
	I1207 23:25:10.988413  502429 status.go:371] ha-683468-m02 host status = "Stopped" (err=<nil>)
	I1207 23:25:10.988429  502429 status.go:384] host is not running, skipping remaining checks
	I1207 23:25:10.988435  502429 status.go:176] ha-683468-m02 status: &{Name:ha-683468-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:25:10.988461  502429 status.go:174] checking status of ha-683468-m04 ...
	I1207 23:25:10.988744  502429 cli_runner.go:164] Run: docker container inspect ha-683468-m04 --format={{.State.Status}}
	I1207 23:25:11.006409  502429 status.go:371] ha-683468-m04 host status = "Stopped" (err=<nil>)
	I1207 23:25:11.006424  502429 status.go:384] host is not running, skipping remaining checks
	I1207 23:25:11.006429  502429 status.go:176] ha-683468-m04 status: &{Name:ha-683468-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (57.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1207 23:25:54.191556  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.525018342s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (48.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 node add --control-plane --alsologtostderr -v 5
E1207 23:26:22.079652  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-683468 node add --control-plane --alsologtostderr -v 5: (48.004784901s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.84s)
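Note: with --control-plane, node add joins another control-plane member instead of a worker; the follow-up HAppyAfterSecondaryNodeAdd check then re-reads "profile list --output json". The recorded pair:
  out/minikube-linux-amd64 -p ha-683468 node add --control-plane --alsologtostderr -v 5
  out/minikube-linux-amd64 -p ha-683468 status --alsologtostderr -v 5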

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
x
+
TestJSONOutput/start/Command (36.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-677640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-677640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (36.390835536s)
--- PASS: TestJSONOutput/start/Command (36.39s)
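Note: with --output=json, minikube start emits machine-readable progress events instead of the usual text UI; the DistinctCurrentSteps and IncreasingCurrentSteps subtests below validate the step numbering in that stream. The recorded invocation:
  out/minikube-linux-amd64 start -p json-output-677640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=containerd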

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-677640 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-677640 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-677640 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-677640 --output=json --user=testUser: (5.85177953s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-034540 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-034540 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.731972ms)

-- stdout --
	{"specversion":"1.0","id":"33797a12-e440-40b5-8bad-89e6e9bea590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-034540] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"28af564c-0bce-443c-8b3a-29784937d72c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"afa28893-7d1e-4429-98c4-37a9359ac102","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2c6c59b-112d-41ad-9470-1843ee3aa6e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig"}}
	{"specversion":"1.0","id":"3be781a4-3d1a-4a13-a941-900296547c85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube"}}
	{"specversion":"1.0","id":"f9d2f52b-d392-4c93-bc95-3fac50832b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"03619974-8f02-4190-8752-3b9496c5145c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7876e8ad-02c8-413e-831d-e7a4106d45e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-034540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-034540
--- PASS: TestErrorJSONOutput (0.23s)
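For reference, each line in the stdout block above is one CloudEvents-style JSON event emitted by --output=json. A minimal sketch of pulling the error event out of such a run with jq; the profile name and the jq filter are illustrative, not part of this test:

  # deliberately use an unsupported driver so only the error event matters (illustrative profile name)
  out/minikube-linux-amd64 start -p json-output-demo --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " exit=" + .data.exitcode'
  # prints, per the event captured above: DRV_UNSUPPORTED_OS exit=56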

TestKicCustomNetwork/create_custom_network (28.62s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-866465 --network=
E1207 23:28:16.814320  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-866465 --network=: (26.494175434s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-866465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-866465
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-866465: (2.109812147s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.62s)

TestKicCustomNetwork/use_default_bridge_network (24.4s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-536795 --network=bridge
E1207 23:28:38.218951  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-536795 --network=bridge: (22.418114021s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-536795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-536795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-536795: (1.96522106s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.40s)

TestKicExistingNetwork (23.09s)

=== RUN   TestKicExistingNetwork
I1207 23:28:48.025382  375863 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1207 23:28:48.041459  375863 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1207 23:28:48.041534  375863 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1207 23:28:48.041551  375863 cli_runner.go:164] Run: docker network inspect existing-network
W1207 23:28:48.057527  375863 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1207 23:28:48.057565  375863 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1207 23:28:48.057583  375863 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1207 23:28:48.057738  375863 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1207 23:28:48.074368  375863 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-53731e9b0be0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:23:1a:1f:52:60} reservation:<nil>}
I1207 23:28:48.074762  375863 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00061b630}
I1207 23:28:48.074805  375863 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1207 23:28:48.074861  375863 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1207 23:28:48.118177  375863 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-202472 --network=existing-network
E1207 23:29:05.921012  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-202472 --network=existing-network: (20.988645404s)
helpers_test.go:175: Cleaning up "existing-network-202472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-202472
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-202472: (1.974607467s)
I1207 23:29:11.097283  375863 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.09s)
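The I/W lines above show minikube creating the labeled bridge network itself when docker network inspect finds nothing. A minimal sketch of doing the same by hand before pointing a cluster at it, reusing the flags from this log; the subnet, network name, and profile name are illustrative:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  # attach a cluster to the pre-existing network, then confirm it was reused rather than recreated
  out/minikube-linux-amd64 start -p existing-network-demo --network=existing-network
  docker network ls --format {{.Name}} | grep existing-network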

TestKicCustomSubnet (25.91s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-002171 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-002171 --subnet=192.168.60.0/24: (23.791764699s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-002171 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-002171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-002171
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-002171: (2.097829381s)
--- PASS: TestKicCustomSubnet (25.91s)

TestKicStaticIP (25.98s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-906225 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-906225 --static-ip=192.168.200.200: (23.710636891s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-906225 ip
helpers_test.go:175: Cleaning up "static-ip-906225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-906225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-906225: (2.112252507s)
--- PASS: TestKicStaticIP (25.98s)
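A minimal sketch of the static-IP flow exercised above, with an illustrative profile name; the address has to sit in a private range that is free on the host:

  out/minikube-linux-amd64 start -p static-ip-demo --static-ip=192.168.200.200
  # should echo the address passed to --static-ip
  out/minikube-linux-amd64 -p static-ip-demo ip
  out/minikube-linux-amd64 delete -p static-ip-demo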

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.95s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-579508 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-579508 --driver=docker  --container-runtime=containerd: (20.570915427s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-581470 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-581470 --driver=docker  --container-runtime=containerd: (21.528448725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-579508
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-581470
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-581470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-581470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-581470: (2.310696946s)
helpers_test.go:175: Cleaning up "first-579508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-579508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-579508: (2.351726399s)
--- PASS: TestMinikubeProfile (47.95s)
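A minimal sketch of the profile switching exercised above. The jq path assumes profile list -ojson returns a top-level valid array of profile objects with a Name field, which is an assumption about the current schema rather than something this log verifies; profile names are illustrative:

  out/minikube-linux-amd64 start -p first-demo --driver=docker --container-runtime=containerd
  out/minikube-linux-amd64 start -p second-demo --driver=docker --container-runtime=containerd
  # make first-demo the active profile, then list everything that exists
  out/minikube-linux-amd64 profile first-demo
  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'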

TestMountStart/serial/StartWithMountFirst (4.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-816338 --memory=3072 --mount-string /tmp/TestMountStartserial148790282/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1207 23:30:54.191624  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-816338 --memory=3072 --mount-string /tmp/TestMountStartserial148790282/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.18005417s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.18s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-816338 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
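A minimal sketch of the host-mount flow these two subtests cover, mirroring the flags from the invocation above; the profile name and host path are illustrative:

  out/minikube-linux-amd64 start -p mount-demo --memory=3072 \
    --mount-string /tmp/mount-demo-src:/minikube-host \
    --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
    --no-kubernetes --driver=docker --container-runtime=containerd
  # the host directory should now be visible inside the guest
  out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host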

TestMountStart/serial/StartWithMountSecond (4.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-832928 --memory=3072 --mount-string /tmp/TestMountStartserial148790282/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-832928 --memory=3072 --mount-string /tmp/TestMountStartserial148790282/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.194861187s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.20s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832928 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-816338 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-816338 --alsologtostderr -v=5: (1.646244679s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832928 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-832928
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-832928: (1.253094484s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.75s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-832928
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-832928: (5.753228646s)
--- PASS: TestMountStart/serial/RestartStopped (6.75s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-832928 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (60.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989065 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989065 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m0.429448575s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.89s)
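A minimal sketch of bringing up the same two-node topology outside the test harness; the profile name is illustrative:

  out/minikube-linux-amd64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
    --driver=docker --container-runtime=containerd
  # expect one control-plane and one worker node, all components Running
  out/minikube-linux-amd64 -p multinode-demo status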

TestMultiNode/serial/DeployApp2Nodes (3.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-989065 -- rollout status deployment/busybox: (1.893837839s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-qwkg2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-tpwc7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-qwkg2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-tpwc7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-qwkg2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-tpwc7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.50s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-qwkg2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-qwkg2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-tpwc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989065 -- exec busybox-7b57f96db7-tpwc7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (21.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-989065 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-989065 -v=5 --alsologtostderr: (21.092538815s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.71s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-989065 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp testdata/cp-test.txt multinode-989065:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile183820006/001/cp-test_multinode-989065.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065:/home/docker/cp-test.txt multinode-989065-m02:/home/docker/cp-test_multinode-989065_multinode-989065-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m02 "sudo cat /home/docker/cp-test_multinode-989065_multinode-989065-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065:/home/docker/cp-test.txt multinode-989065-m03:/home/docker/cp-test_multinode-989065_multinode-989065-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m03 "sudo cat /home/docker/cp-test_multinode-989065_multinode-989065-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp testdata/cp-test.txt multinode-989065-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile183820006/001/cp-test_multinode-989065-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065-m02:/home/docker/cp-test.txt multinode-989065:/home/docker/cp-test_multinode-989065-m02_multinode-989065.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065 "sudo cat /home/docker/cp-test_multinode-989065-m02_multinode-989065.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065-m02:/home/docker/cp-test.txt multinode-989065-m03:/home/docker/cp-test_multinode-989065-m02_multinode-989065-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m03 "sudo cat /home/docker/cp-test_multinode-989065-m02_multinode-989065-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp testdata/cp-test.txt multinode-989065-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile183820006/001/cp-test_multinode-989065-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065-m03:/home/docker/cp-test.txt multinode-989065:/home/docker/cp-test_multinode-989065-m03_multinode-989065.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065 "sudo cat /home/docker/cp-test_multinode-989065-m03_multinode-989065.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 cp multinode-989065-m03:/home/docker/cp-test.txt multinode-989065-m02:/home/docker/cp-test_multinode-989065-m03_multinode-989065-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 ssh -n multinode-989065-m02 "sudo cat /home/docker/cp-test_multinode-989065-m03_multinode-989065-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.69s)
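A minimal sketch of the cp round-trip this subtest performs between the host and the nodes; profile, node, and file names are illustrative:

  # host -> primary node
  out/minikube-linux-amd64 -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
  # primary node -> second node, then read it back over ssh
  out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
    multinode-demo-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"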

TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-989065 node stop m03: (1.258297538s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989065 status: exit status 7 (489.768724ms)

-- stdout --
	multinode-989065
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-989065-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-989065-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr: exit status 7 (472.657313ms)

-- stdout --
	multinode-989065
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-989065-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-989065-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1207 23:32:51.027741  564525 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:32:51.027830  564525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:32:51.027834  564525 out.go:374] Setting ErrFile to fd 2...
	I1207 23:32:51.027838  564525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:32:51.028008  564525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:32:51.028170  564525 out.go:368] Setting JSON to false
	I1207 23:32:51.028196  564525 mustload.go:66] Loading cluster: multinode-989065
	I1207 23:32:51.028278  564525 notify.go:221] Checking for updates...
	I1207 23:32:51.028515  564525 config.go:182] Loaded profile config "multinode-989065": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:32:51.028528  564525 status.go:174] checking status of multinode-989065 ...
	I1207 23:32:51.029036  564525 cli_runner.go:164] Run: docker container inspect multinode-989065 --format={{.State.Status}}
	I1207 23:32:51.048825  564525 status.go:371] multinode-989065 host status = "Running" (err=<nil>)
	I1207 23:32:51.048861  564525 host.go:66] Checking if "multinode-989065" exists ...
	I1207 23:32:51.049087  564525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-989065
	I1207 23:32:51.066245  564525 host.go:66] Checking if "multinode-989065" exists ...
	I1207 23:32:51.066470  564525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:32:51.066520  564525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-989065
	I1207 23:32:51.083843  564525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33290 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/multinode-989065/id_rsa Username:docker}
	I1207 23:32:51.172664  564525 ssh_runner.go:195] Run: systemctl --version
	I1207 23:32:51.178679  564525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:32:51.190192  564525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:32:51.244967  564525 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:32:51.234652676 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:32:51.245505  564525 kubeconfig.go:125] found "multinode-989065" server: "https://192.168.67.2:8443"
	I1207 23:32:51.245533  564525 api_server.go:166] Checking apiserver status ...
	I1207 23:32:51.245594  564525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:32:51.257158  564525 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1314/cgroup
	W1207 23:32:51.265191  564525 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1314/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:32:51.265235  564525 ssh_runner.go:195] Run: ls
	I1207 23:32:51.268634  564525 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 23:32:51.272741  564525 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1207 23:32:51.272760  564525 status.go:463] multinode-989065 apiserver status = Running (err=<nil>)
	I1207 23:32:51.272769  564525 status.go:176] multinode-989065 status: &{Name:multinode-989065 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:32:51.272792  564525 status.go:174] checking status of multinode-989065-m02 ...
	I1207 23:32:51.273031  564525 cli_runner.go:164] Run: docker container inspect multinode-989065-m02 --format={{.State.Status}}
	I1207 23:32:51.290186  564525 status.go:371] multinode-989065-m02 host status = "Running" (err=<nil>)
	I1207 23:32:51.290203  564525 host.go:66] Checking if "multinode-989065-m02" exists ...
	I1207 23:32:51.290461  564525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-989065-m02
	I1207 23:32:51.306799  564525 host.go:66] Checking if "multinode-989065-m02" exists ...
	I1207 23:32:51.307081  564525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:32:51.307121  564525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-989065-m02
	I1207 23:32:51.323728  564525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33295 SSHKeyPath:/home/jenkins/minikube-integration/22054-372312/.minikube/machines/multinode-989065-m02/id_rsa Username:docker}
	I1207 23:32:51.412293  564525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:32:51.424363  564525 status.go:176] multinode-989065-m02 status: &{Name:multinode-989065-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:32:51.424393  564525 status.go:174] checking status of multinode-989065-m03 ...
	I1207 23:32:51.424625  564525 cli_runner.go:164] Run: docker container inspect multinode-989065-m03 --format={{.State.Status}}
	I1207 23:32:51.441402  564525 status.go:371] multinode-989065-m03 host status = "Stopped" (err=<nil>)
	I1207 23:32:51.441419  564525 status.go:384] host is not running, skipping remaining checks
	I1207 23:32:51.441424  564525 status.go:176] multinode-989065-m03 status: &{Name:multinode-989065-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
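As the non-zero exits above show, status deliberately returns exit code 7 while any node is stopped, so scripts have to tolerate it. A minimal sketch, with an illustrative profile name:

  out/minikube-linux-amd64 -p multinode-demo node stop m03
  # status exits 7 while a node is down; capture the code instead of letting the script abort
  out/minikube-linux-amd64 -p multinode-demo status || echo "status exit code: $?"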

TestMultiNode/serial/StartAfterStop (6.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-989065 node start m03 -v=5 --alsologtostderr: (6.038628442s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.71s)

TestMultiNode/serial/RestartKeepsNodes (73.3s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989065
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-989065
E1207 23:32:59.889334  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:33:16.815972  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-989065: (24.927756278s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989065 --wait=true -v=5 --alsologtostderr
E1207 23:33:38.218817  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989065 --wait=true -v=5 --alsologtostderr: (48.243761034s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989065
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.30s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-989065 node delete m03: (4.566888891s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

TestMultiNode/serial/StopMultiNode (23.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-989065 stop: (23.752210766s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989065 status: exit status 7 (99.958909ms)

-- stdout --
	multinode-989065
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-989065-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr: exit status 7 (93.638886ms)

-- stdout --
	multinode-989065
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-989065-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1207 23:34:40.496835  574296 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:34:40.497055  574296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:34:40.497063  574296 out.go:374] Setting ErrFile to fd 2...
	I1207 23:34:40.497067  574296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:34:40.497254  574296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:34:40.497405  574296 out.go:368] Setting JSON to false
	I1207 23:34:40.497428  574296 mustload.go:66] Loading cluster: multinode-989065
	I1207 23:34:40.497546  574296 notify.go:221] Checking for updates...
	I1207 23:34:40.497787  574296 config.go:182] Loaded profile config "multinode-989065": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:34:40.497801  574296 status.go:174] checking status of multinode-989065 ...
	I1207 23:34:40.498204  574296 cli_runner.go:164] Run: docker container inspect multinode-989065 --format={{.State.Status}}
	I1207 23:34:40.517133  574296 status.go:371] multinode-989065 host status = "Stopped" (err=<nil>)
	I1207 23:34:40.517167  574296 status.go:384] host is not running, skipping remaining checks
	I1207 23:34:40.517176  574296 status.go:176] multinode-989065 status: &{Name:multinode-989065 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:34:40.517228  574296 status.go:174] checking status of multinode-989065-m02 ...
	I1207 23:34:40.517482  574296 cli_runner.go:164] Run: docker container inspect multinode-989065-m02 --format={{.State.Status}}
	I1207 23:34:40.533949  574296 status.go:371] multinode-989065-m02 host status = "Stopped" (err=<nil>)
	I1207 23:34:40.533981  574296 status.go:384] host is not running, skipping remaining checks
	I1207 23:34:40.533995  574296 status.go:176] multinode-989065-m02 status: &{Name:multinode-989065-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)

TestMultiNode/serial/RestartMultiNode (49.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989065 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989065 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.44086112s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989065 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.01s)

TestMultiNode/serial/ValidateNameConflict (23.43s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989065
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989065-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-989065-m02 --driver=docker  --container-runtime=containerd: exit status 14 (74.227749ms)

-- stdout --
	* [multinode-989065-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-989065-m02' is duplicated with machine name 'multinode-989065-m02' in profile 'multinode-989065'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989065-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989065-m03 --driver=docker  --container-runtime=containerd: (21.064756707s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-989065
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-989065: exit status 80 (285.683166ms)

-- stdout --
	* Adding node m03 to cluster multinode-989065 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-989065-m03 already exists in multinode-989065-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-989065-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-989065-m03: (1.940026818s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.43s)
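A condensed sketch of the name-conflict checks exercised above, with the commands taken verbatim from the run: reusing an existing machine name as a new profile fails with MK_USAGE (exit 14), and adding a node whose generated name collides with another profile fails with GUEST_NODE_ADD (exit 80).

	out/minikube-linux-amd64 node list -p multinode-989065
	# profile name collides with the m02 machine of the existing cluster -> exit 14 (MK_USAGE)
	out/minikube-linux-amd64 start -p multinode-989065-m02 --driver=docker --container-runtime=containerd
	# a non-colliding profile name works
	out/minikube-linux-amd64 start -p multinode-989065-m03 --driver=docker --container-runtime=containerd
	# the next node would be named m03, which now clashes with that profile -> exit 80 (GUEST_NODE_ADD)
	out/minikube-linux-amd64 node add -p multinode-989065
	out/minikube-linux-amd64 delete -p multinode-989065-m03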

                                                
                                    
x
+
TestPreload (102.26s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-444379 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-444379 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (43.904931977s)
preload_test.go:53: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-444379 image pull gcr.io/k8s-minikube/busybox
preload_test.go:59: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-444379
preload_test.go:59: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-444379: (6.749005159s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-444379 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-444379 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (48.120233332s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-444379 image list
helpers_test.go:175: Cleaning up "test-preload-444379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-444379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-444379: (2.362282438s)
--- PASS: TestPreload (102.26s)
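The preload flow above, condensed into a sketch with the same commands; the closing image list is presumably what preload_test.go:72 inspects to confirm the manually pulled busybox image survived the preload-enabled restart.

	out/minikube-linux-amd64 start -p test-preload-444379 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p test-preload-444379 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-444379
	out/minikube-linux-amd64 start -p test-preload-444379 --preload=true --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p test-preload-444379 image list
	out/minikube-linux-amd64 delete -p test-preload-444379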

                                                
                                    
x
+
TestScheduledStopUnix (97.78s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-924745 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-924745 --memory=3072 --driver=docker  --container-runtime=containerd: (21.598953558s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924745 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:38:01.006425  592478 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:38:01.006513  592478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:01.006521  592478 out.go:374] Setting ErrFile to fd 2...
	I1207 23:38:01.006525  592478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:01.006740  592478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:38:01.006974  592478 out.go:368] Setting JSON to false
	I1207 23:38:01.007060  592478 mustload.go:66] Loading cluster: scheduled-stop-924745
	I1207 23:38:01.007331  592478 config.go:182] Loaded profile config "scheduled-stop-924745": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:38:01.007393  592478 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/config.json ...
	I1207 23:38:01.007559  592478 mustload.go:66] Loading cluster: scheduled-stop-924745
	I1207 23:38:01.007652  592478 config.go:182] Loaded profile config "scheduled-stop-924745": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-924745 -n scheduled-stop-924745
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924745 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:38:01.379589  592629 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:38:01.379847  592629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:01.379856  592629 out.go:374] Setting ErrFile to fd 2...
	I1207 23:38:01.379861  592629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:01.380057  592629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:38:01.380270  592629 out.go:368] Setting JSON to false
	I1207 23:38:01.380448  592629 daemonize_unix.go:73] killing process 592512 as it is an old scheduled stop
	I1207 23:38:01.380543  592629 mustload.go:66] Loading cluster: scheduled-stop-924745
	I1207 23:38:01.380902  592629 config.go:182] Loaded profile config "scheduled-stop-924745": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:38:01.380977  592629 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/config.json ...
	I1207 23:38:01.381151  592629 mustload.go:66] Loading cluster: scheduled-stop-924745
	I1207 23:38:01.381235  592629 config.go:182] Loaded profile config "scheduled-stop-924745": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1207 23:38:01.386900  375863 retry.go:31] will retry after 135.272µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.388074  375863 retry.go:31] will retry after 80.855µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.389236  375863 retry.go:31] will retry after 114.354µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.390393  375863 retry.go:31] will retry after 487.362µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.391539  375863 retry.go:31] will retry after 259.676µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.392709  375863 retry.go:31] will retry after 662.61µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.393848  375863 retry.go:31] will retry after 1.644884ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.396077  375863 retry.go:31] will retry after 930.444µs: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.397226  375863 retry.go:31] will retry after 1.923212ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.399455  375863 retry.go:31] will retry after 3.807067ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.403658  375863 retry.go:31] will retry after 4.42822ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.408896  375863 retry.go:31] will retry after 9.740518ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.419137  375863 retry.go:31] will retry after 18.877711ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.438364  375863 retry.go:31] will retry after 22.464901ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.461586  375863 retry.go:31] will retry after 25.479956ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
I1207 23:38:01.487889  375863 retry.go:31] will retry after 62.476333ms: open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924745 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1207 23:38:16.814963  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924745 -n scheduled-stop-924745
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-924745
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-924745 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:38:27.279841  593495 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:38:27.279959  593495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:27.279969  593495 out.go:374] Setting ErrFile to fd 2...
	I1207 23:38:27.279975  593495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:27.280194  593495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:38:27.280438  593495 out.go:368] Setting JSON to false
	I1207 23:38:27.280538  593495 mustload.go:66] Loading cluster: scheduled-stop-924745
	I1207 23:38:27.280906  593495 config.go:182] Loaded profile config "scheduled-stop-924745": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1207 23:38:27.280991  593495 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/scheduled-stop-924745/config.json ...
	I1207 23:38:27.281199  593495 mustload.go:66] Loading cluster: scheduled-stop-924745
	I1207 23:38:27.281316  593495 config.go:182] Loaded profile config "scheduled-stop-924745": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1207 23:38:38.218976  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1207 23:38:57.257110  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-924745
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-924745: exit status 7 (80.385884ms)

                                                
                                                
-- stdout --
	scheduled-stop-924745
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924745 -n scheduled-stop-924745
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924745 -n scheduled-stop-924745: exit status 7 (78.825274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-924745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-924745
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-924745: (4.67346017s)
--- PASS: TestScheduledStopUnix (97.78s)
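The scheduled-stop sequence above reduced to the commands that matter: scheduling again replaces the earlier schedule (the log shows the old scheduled-stop process being killed), --cancel-scheduled clears it, and after the final 15s schedule fires the host reports Stopped with exit code 7.

	out/minikube-linux-amd64 start -p scheduled-stop-924745 --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 stop -p scheduled-stop-924745 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-924745 --schedule 15s      # replaces the 5m schedule
	out/minikube-linux-amd64 stop -p scheduled-stop-924745 --cancel-scheduled  # "All existing scheduled stops cancelled"
	out/minikube-linux-amd64 stop -p scheduled-stop-924745 --schedule 15s      # let this one fire
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-924745 -n scheduled-stop-924745   # eventually "Stopped", exit 7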

                                                
                                    
x
+
TestInsufficientStorage (11.19s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-243909 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-243909 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.773570813s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"36e27aff-693a-413c-854b-d531c4d9b9b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-243909] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd83ef23-3e31-47ee-b9eb-8ffbff30919a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"ad423e7a-6ebe-46f7-b032-558a495da220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90be0f83-4feb-42ab-ba01-db6b29cad2bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig"}}
	{"specversion":"1.0","id":"f2c8c548-9139-4eea-8114-e8a76fe2007d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube"}}
	{"specversion":"1.0","id":"b36900a4-6069-4b09-b496-0e9d102d67d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f2e67a61-b2b1-49e2-be99-3c3d1ef3a7d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"807b18c6-fb2c-440a-b6fc-995b5e096709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b7b2c908-e2bd-4e20-b5a7-78cc1cd1e705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"99ff0f80-aef9-433a-b2ea-280b032f40ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc1705bd-8183-49a9-8c62-cbe4d194b3b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"04aa928d-311b-478a-87e0-5b15acc7e90a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-243909\" primary control-plane node in \"insufficient-storage-243909\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b41749a-ade6-489f-91eb-5a92a0b73e4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f038acad-9520-47b5-9cd8-0b2c7a36f9ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c4b81c0-c350-46bc-ab41-9288297a1a5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-243909 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-243909 --output=json --layout=cluster: exit status 7 (283.862885ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-243909","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-243909","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 23:39:26.167639  595806 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-243909" does not appear in /home/jenkins/minikube-integration/22054-372312/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-243909 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-243909 --output=json --layout=cluster: exit status 7 (274.965053ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-243909","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-243909","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 23:39:26.444211  595915 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-243909" does not appear in /home/jenkins/minikube-integration/22054-372312/kubeconfig
	E1207 23:39:26.454310  595915 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/insufficient-storage-243909/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-243909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-243909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-243909: (1.857315869s)
--- PASS: TestInsufficientStorage (11.19s)
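The run above has MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 in its environment, which appear to be what makes minikube treat /var as full; with that in place the flow is just the commands below, expecting exit 26 (RSRC_DOCKER_STORAGE) from start and exit 7 with StatusName InsufficientStorage from status.

	# assumes the two MINIKUBE_TEST_* variables shown in the log are what simulate the full disk
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-amd64 start -p insufficient-storage-243909 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 status -p insufficient-storage-243909 --output=json --layout=cluster
	out/minikube-linux-amd64 delete -p insufficient-storage-243909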

                                                
                                    
x
+
TestRunningBinaryUpgrade (291.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1328199987 start -p running-upgrade-315368 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1328199987 start -p running-upgrade-315368 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (20.399410433s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-315368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1207 23:40:54.191813  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-315368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.868393774s)
helpers_test.go:175: Cleaning up "running-upgrade-315368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-315368
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-315368: (2.085192224s)
--- PASS: TestRunningBinaryUpgrade (291.01s)

                                                
                                    
x
+
TestKubernetesUpgrade (329.05s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.562711285s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-389929
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-389929: (11.937916422s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-389929 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-389929 status --format={{.Host}}: exit status 7 (82.259484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m26.973917739s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-389929 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (85.920653ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-389929] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-389929
	    minikube start -p kubernetes-upgrade-389929 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3899292 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-389929 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (11.18787606s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-389929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-389929
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-389929: (2.160282593s)
--- PASS: TestKubernetesUpgrade (329.05s)
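Condensed, the upgrade path exercised above is: install v1.28.0, stop, restart at v1.35.0-beta.0, confirm that a downgrade request is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED), then confirm a restart at the current version still works.

	out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-389929
	out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # exit 106: downgrade refused
	out/minikube-linux-amd64 start -p kubernetes-upgrade-389929 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-389929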

                                                
                                    
x
+
TestMissingContainerUpgrade (129.97s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.174836257 start -p missing-upgrade-178910 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.174836257 start -p missing-upgrade-178910 --memory=3072 --driver=docker  --container-runtime=containerd: (44.043627063s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-178910
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-178910
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-178910 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-178910 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m22.597118307s)
helpers_test.go:175: Cleaning up "missing-upgrade-178910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-178910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-178910: (1.90888109s)
--- PASS: TestMissingContainerUpgrade (129.97s)
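The recovery scenario above in short: a cluster created by an older release binary (the /tmp/minikube-v1.35.0.* file, apparently fetched by the test setup) has its Docker container removed out from under it, and the binary under test is expected to recreate it on the next start.

	/tmp/minikube-v1.35.0.174836257 start -p missing-upgrade-178910 --memory=3072 --driver=docker --container-runtime=containerd
	docker stop missing-upgrade-178910
	docker rm missing-upgrade-178910
	out/minikube-linux-amd64 start -p missing-upgrade-178910 --memory=3072 --driver=docker --container-runtime=containerd   # must recreate the missing container
	out/minikube-linux-amd64 delete -p missing-upgrade-178910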

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.93s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (312.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.193603352 start -p stopped-upgrade-477061 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1207 23:40:01.282464  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.193603352 start -p stopped-upgrade-477061 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (43.883015035s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.193603352 -p stopped-upgrade-477061 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.193603352 -p stopped-upgrade-477061 stop: (1.251938884s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-477061 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-477061 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.573779746s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (312.71s)

                                                
                                    
x
+
TestPause/serial/Start (39.52s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-213513 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-213513 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (39.5228035s)
--- PASS: TestPause/serial/Start (39.52s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (5.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-213513 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-213513 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.464929729s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.48s)

                                                
                                    
x
+
TestPause/serial/Pause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-213513 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-213513 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-213513 --output=json --layout=cluster: exit status 2 (341.418281ms)

                                                
                                                
-- stdout --
	{"Name":"pause-213513","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-213513","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-213513 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.63s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-213513 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.66s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-213513 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-213513 --alsologtostderr -v=5: (2.661926839s)
--- PASS: TestPause/serial/DeletePaused (2.66s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (42.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (42.73717504s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-213513
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-213513: exit status 1 (18.423606ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-213513: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (42.79s)
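Taken together, the pause group above boils down to the following sketch: status exits 2 while the cluster is paused (StatusName Paused), and after delete the profile's Docker volume is gone, so docker volume inspect fails with exit 1.

	out/minikube-linux-amd64 start -p pause-213513 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 pause -p pause-213513
	out/minikube-linux-amd64 status -p pause-213513 --output=json --layout=cluster   # exit 2 while paused
	out/minikube-linux-amd64 unpause -p pause-213513
	out/minikube-linux-amd64 pause -p pause-213513
	out/minikube-linux-amd64 delete -p pause-213513
	docker volume inspect pause-213513   # exit 1: no such volume after deletion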

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (84.7202ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-294870] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
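In other words, --no-kubernetes and --kubernetes-version are mutually exclusive; the error also points at the config command for the case where a version was set globally.

	out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd   # exit 14 (MK_USAGE)
	minikube config unset kubernetes-version   # the hint printed above, for a globally configured version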

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (19.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294870 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1207 23:43:16.814413  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294870 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (19.609395927s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-294870 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (19.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (21.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1207 23:43:38.219208  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-372098/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (19.714176158s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-294870 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-294870 status -o json: exit status 2 (289.48363ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-294870","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-294870
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-294870: (1.956977091s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.370133764s)
--- PASS: TestNoKubernetes/serial/Start (6.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22054-372312/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-294870 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-294870 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.152738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-294870
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-294870: (1.276611236s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (5.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294870 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294870 --driver=docker  --container-runtime=containerd: (5.908205377s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-294870 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-294870 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.816193ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
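A sketch of the no-Kubernetes lifecycle checked across the serial steps above: a --no-kubernetes profile keeps the host running with kubelet and the apiserver stopped, the is-active probe over ssh exits non-zero, and a later start without arguments keeps the profile Kubernetes-free.

	out/minikube-linux-amd64 start -p NoKubernetes-294870 --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p NoKubernetes-294870 status -o json   # Host Running, Kubelet/APIServer Stopped
	out/minikube-linux-amd64 ssh -p NoKubernetes-294870 "sudo systemctl is-active --quiet service kubelet"   # exit 1: kubelet inactive
	out/minikube-linux-amd64 stop -p NoKubernetes-294870
	out/minikube-linux-amd64 start -p NoKubernetes-294870 --driver=docker --container-runtime=containerd     # restart with no args stays Kubernetes-free
	out/minikube-linux-amd64 delete -p NoKubernetes-294870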

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-308161 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-308161 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (187.969429ms)

                                                
                                                
-- stdout --
	* [false-308161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:44:14.053182  657441 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:44:14.053473  657441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:44:14.053488  657441 out.go:374] Setting ErrFile to fd 2...
	I1207 23:44:14.053494  657441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:44:14.053786  657441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-372312/.minikube/bin
	I1207 23:44:14.054244  657441 out.go:368] Setting JSON to false
	I1207 23:44:14.055395  657441 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8803,"bootTime":1765142251,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:44:14.055452  657441 start.go:143] virtualization: kvm guest
	I1207 23:44:14.057370  657441 out.go:179] * [false-308161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:44:14.058766  657441 notify.go:221] Checking for updates...
	I1207 23:44:14.058771  657441 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:44:14.060292  657441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:44:14.061668  657441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-372312/kubeconfig
	I1207 23:44:14.063068  657441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-372312/.minikube
	I1207 23:44:14.070875  657441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:44:14.072060  657441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:44:14.073606  657441 config.go:182] Loaded profile config "kubernetes-upgrade-389929": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1207 23:44:14.073724  657441 config.go:182] Loaded profile config "running-upgrade-315368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1207 23:44:14.073827  657441 config.go:182] Loaded profile config "stopped-upgrade-477061": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1207 23:44:14.073934  657441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:44:14.101903  657441 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:44:14.101994  657441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:44:14.166320  657441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:44:14.155266733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:44:14.166481  657441 docker.go:319] overlay module found
	I1207 23:44:14.168127  657441 out.go:179] * Using the docker driver based on user configuration
	I1207 23:44:14.169247  657441 start.go:309] selected driver: docker
	I1207 23:44:14.169268  657441 start.go:927] validating driver "docker" against <nil>
	I1207 23:44:14.169283  657441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:44:14.172002  657441 out.go:203] 
	W1207 23:44:14.173504  657441 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1207 23:44:14.174565  657441 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-308161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-308161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-389929
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-315368
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-477061
contexts:
- context:
    cluster: kubernetes-upgrade-389929
    user: kubernetes-upgrade-389929
  name: kubernetes-upgrade-389929
- context:
    cluster: running-upgrade-315368
    user: running-upgrade-315368
  name: running-upgrade-315368
- context:
    cluster: stopped-upgrade-477061
    user: stopped-upgrade-477061
  name: stopped-upgrade-477061
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-389929
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/kubernetes-upgrade-389929/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/kubernetes-upgrade-389929/client.key
- name: running-upgrade-315368
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/running-upgrade-315368/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/running-upgrade-315368/client.key
- name: stopped-upgrade-477061
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/stopped-upgrade-477061/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/stopped-upgrade-477061/client.key
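Note: the kubeconfig above lists only the three upgrade-test clusters and has current-context set to "", with no entry for false-308161, which is why every kubectl probe in this debug dump reports "context was not found". A minimal sketch of a manual check (not part of the test harness; the context name is taken from the dump above):

	kubectl config get-contexts
	kubectl config use-context kubernetes-upgrade-389929
	kubectl --context kubernetes-upgrade-389929 get nodes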

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-308161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308161"

                                                
                                                
----------------------- debugLogs end: false-308161 [took: 3.188708789s] --------------------------------
helpers_test.go:175: Cleaning up "false-308161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-308161
--- PASS: TestNetworkPlugins/group/false (3.56s)
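Note: the MK_USAGE exit captured in the stderr block above appears to be the expected outcome of this "false" CNI variant (the dump is marked "pass: true"): with the containerd runtime, minikube refuses to start when CNI is explicitly disabled. A minimal sketch of the two kinds of invocation (profile, driver, and runtime flags are taken from the log; the harness's full argument list is longer, and bridge is shown only as one example value):

	# rejected: containerd requires a CNI, so disabling it fails with MK_USAGE
	minikube start -p false-308161 --driver=docker --container-runtime=containerd --cni=false
	# accepted: choose a concrete CNI instead, or omit --cni for auto-selection
	minikube start -p false-308161 --driver=docker --container-runtime=containerd --cni=bridge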

                                                
                                    

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-477061
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-477061: (1.180934441s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-415725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-415725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.068280793s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (46.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-762411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-762411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (46.574449565s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-299948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1207 23:45:54.192263  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/functional-463278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-299948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (42.470144526s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-415725 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [df527c24-b1c4-4721-a4ac-3f6ced1e30aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [df527c24-b1c4-4721-a4ac-3f6ced1e30aa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003681246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-415725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.25s)
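The DeployApp steps above create testdata/busybox.yaml, wait up to 8m0s for pods matching integration-test=busybox, then run ulimit -n inside the pod. A roughly equivalent manual check (a sketch, not the test helpers' own implementation):

	kubectl --context old-k8s-version-415725 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
	kubectl --context old-k8s-version-415725 exec busybox -- /bin/sh -c "ulimit -n"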

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-299948 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f0054f30-4494-4fd9-a803-6cdfc1468934] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f0054f30-4494-4fd9-a803-6cdfc1468934] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003487693s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-299948 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-762411 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [11bfb614-5b86-4575-b4fc-01da64d47464] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [11bfb614-5b86-4575-b4fc-01da64d47464] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003525573s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-762411 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-415725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-415725 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-415725 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-415725 --alsologtostderr -v=3: (11.994119908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-299948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-299948 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-299948 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-299948 --alsologtostderr -v=3: (12.012261702s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-762411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-762411 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-762411 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-762411 --alsologtostderr -v=3: (12.029630779s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-415725 -n old-k8s-version-415725
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-415725 -n old-k8s-version-415725: exit status 7 (77.851199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-415725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-415725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-415725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.460937887s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-415725 -n old-k8s-version-415725
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299948 -n embed-certs-299948
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299948 -n embed-certs-299948: exit status 7 (77.040308ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-299948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-299948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-299948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (48.148826486s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299948 -n embed-certs-299948
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-762411 -n no-preload-762411
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-762411 -n no-preload-762411: exit status 7 (86.909836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-762411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-762411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-762411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (51.721874673s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-762411 -n no-preload-762411
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zmrk6" [de432b5b-85c1-4d3b-a832-33b260b5e999] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003112948s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5jmw2" [0f7b473c-f0e8-47d6-b911-3f6b26876410] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003892478s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zmrk6" [de432b5b-85c1-4d3b-a832-33b260b5e999] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003105904s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-299948 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-f44h4" [27041c7f-7746-4e44-8231-1ead7a7eddb7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003704587s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5jmw2" [0f7b473c-f0e8-47d6-b911-3f6b26876410] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003773413s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-415725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-299948 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-299948 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299948 -n embed-certs-299948
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299948 -n embed-certs-299948: exit status 2 (316.886846ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-299948 -n embed-certs-299948
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-299948 -n embed-certs-299948: exit status 2 (325.071263ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-299948 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299948 -n embed-certs-299948
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-299948 -n embed-certs-299948
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-415725 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-415725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-415725 -n old-k8s-version-415725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-415725 -n old-k8s-version-415725: exit status 2 (326.800363ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-415725 -n old-k8s-version-415725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-415725 -n old-k8s-version-415725: exit status 2 (320.051904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-415725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-415725 -n old-k8s-version-415725
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-415725 -n old-k8s-version-415725
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-f44h4" [27041c7f-7746-4e44-8231-1ead7a7eddb7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002748718s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-762411 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-122362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-122362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (41.044317129s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-762411 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-762411 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-762411 --alsologtostderr -v=1: (1.153041119s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-762411 -n no-preload-762411
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-762411 -n no-preload-762411: exit status 2 (336.405962ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-762411 -n no-preload-762411
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-762411 -n no-preload-762411: exit status 2 (331.078977ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-762411 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-762411 --alsologtostderr -v=1: (1.109187203s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-762411 -n no-preload-762411
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-762411 -n no-preload-762411
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (22.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-640327 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-640327 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (22.696978788s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (22.70s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (42.906058544s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-640327 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-640327 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-640327 --alsologtostderr -v=3: (1.476864179s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-640327 -n newest-cni-640327
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-640327 -n newest-cni-640327: exit status 7 (142.870262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-640327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.35s)
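The stop / enable-addon sequence above can be replayed by hand; exit status 7 from status simply reflects the stopped host, which is why the test logs it as "may be ok":

    out/minikube-linux-amd64 stop -p newest-cni-640327 --alsologtostderr -v=3
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-640327 -n newest-cni-640327
    # prints "Stopped" and exits 7 while the profile is stopped
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-640327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4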

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (11.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-640327 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-640327 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (10.76439723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-640327 -n newest-cni-640327
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-640327 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-640327 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-640327 -n newest-cni-640327
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-640327 -n newest-cni-640327: exit status 2 (316.04245ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-640327 -n newest-cni-640327
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-640327 -n newest-cni-640327: exit status 2 (310.460003ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-640327 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-640327 -n newest-cni-640327
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-640327 -n newest-cni-640327
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)
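The pause check above is the same four-step sequence used for the other profiles: pause, confirm the apiserver reports Paused and the kubelet reports Stopped (both with exit status 2, which the test tolerates), then unpause and confirm status exits cleanly again:

    out/minikube-linux-amd64 pause -p newest-cni-640327 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-640327 -n newest-cni-640327   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-640327 -n newest-cni-640327     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-640327 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-640327 -n newest-cni-640327   # exits 0 once unpaused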

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-122362 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ef2c9ad1-1f32-4116-9d4c-41875110ad3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ef2c9ad1-1f32-4116-9d4c-41875110ad3f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.00599522s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-122362 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.34s)
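DeployApp boils down to creating the busybox pod from testdata, waiting for it to become Ready, and reading the open-file limit inside the container; the kubectl wait below is a rough stand-in for the test's Go polling helper, not a command taken from this run:

    kubectl --context default-k8s-diff-port-122362 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-122362 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context default-k8s-diff-port-122362 exec busybox -- /bin/sh -c "ulimit -n"   # prints the container's open-file limit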

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (38.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1207 23:48:16.814762  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (38.501441268s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (38.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-122362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-122362 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)
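EnableAddonWhileActive applies the metrics-server addon with an overridden image and registry, then inspects the resulting Deployment. A hedged way to see the override directly (the composed image path below is an assumption about how minikube joins --registries and --images; the test itself only runs kubectl describe):

    kubectl --context default-k8s-diff-port-122362 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to show something like fake.domain/registry.k8s.io/echoserver:1.4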

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-122362 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-122362 --alsologtostderr -v=3: (12.136612359s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (53.894002398s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-308161 "pgrep -a kubelet"
I1207 23:48:22.966548  375863 config.go:182] Loaded profile config "auto-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
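KubeletFlags just dumps the kubelet command line over SSH; piping it through grep (an illustrative addition, not part of the test) makes it easier to pick out a single flag:

    out/minikube-linux-amd64 ssh -p auto-308161 "pgrep -a kubelet"
    # illustrative: isolate one flag, e.g. the CRI socket handed to the kubelet
    out/minikube-linux-amd64 ssh -p auto-308161 "pgrep -a kubelet" | grep -o -- '--container-runtime-endpoint=[^ ]*'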

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-308161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h6bqz" [d9d3391d-02f3-429d-82de-6b2050705e59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h6bqz" [d9d3391d-02f3-429d-82de-6b2050705e59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004115584s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362: exit status 7 (91.7994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-122362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-122362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-122362 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (48.322553051s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362
I1207 23:49:21.739116  375863 config.go:182] Loaded profile config "calico-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.70s)
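Since this profile was started with --apiserver-port=8444, a quick sanity check (not part of the test output above) is to confirm the control-plane endpoint advertises that port:

    kubectl --context default-k8s-diff-port-122362 cluster-info
    # the control plane URL should end in :8444 rather than the default :8443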

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
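The DNS / Localhost / HairPin trio above is three exec probes against the same netcat deployment, and can be run directly:

    kubectl --context auto-308161 exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
    kubectl --context auto-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # pod-local port
    kubectl --context auto-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via the netcat service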

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-btxbp" [0ae6c4f3-45d5-4096-ad7c-f975b2605f77] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004441228s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
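ControllerPod waits for the CNI daemonset pod to report Ready. A roughly equivalent manual check, using kubectl wait instead of the test's Go polling helper, would be:

    kubectl --context kindnet-308161 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m
    # the calico and flannel groups below do the same with k8s-app=calico-node (kube-system) and app=flannel (kube-flannel)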

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.75945736s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-308161 "pgrep -a kubelet"
I1207 23:48:59.770862  375863 config.go:182] Loaded profile config "kindnet-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-308161 replace --force -f testdata/netcat-deployment.yaml
I1207 23:49:00.434405  375863 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1207 23:49:00.728624  375863 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zqrvf" [cc2a7fd6-bfcf-4c13-a8aa-bc9704b01aad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zqrvf" [cc2a7fd6-bfcf-4c13-a8aa-bc9704b01aad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004105246s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-m5vnr" [37fd419f-dbec-4cc5-b42e-d173945b758f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00525704s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-308161 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-308161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tkg7h" [be77b302-99a8-4013-b2d8-f0a9e2319d85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tkg7h" [be77b302-99a8-4013-b2d8-f0a9e2319d85] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004139532s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-64nd6" [ebf0448d-89b8-4e04-a103-ccaba4dd08cc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003677596s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-64nd6" [ebf0448d-89b8-4e04-a103-ccaba4dd08cc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003126919s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-122362 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (70.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m10.145370231s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-122362 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-122362 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362: exit status 2 (336.499533ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362: exit status 2 (357.0753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-122362 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-122362 -n default-k8s-diff-port-122362
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1207 23:49:39.891015  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/addons-007439/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (50.07470119s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-308161 "pgrep -a kubelet"
I1207 23:49:48.226803  375863 config.go:182] Loaded profile config "custom-flannel-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-308161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-djt67" [b0505561-cd6e-42ff-9885-501685ce2ab5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-djt67" [b0505561-cd6e-42ff-9885-501685ce2ab5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.13115805s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-308161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m3.32417415s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-lnzfz" [b8013bce-acc9-47ff-b5f8-d3926fa93e85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003090487s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-308161 "pgrep -a kubelet"
I1207 23:50:35.534793  375863 config.go:182] Loaded profile config "flannel-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-308161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vk5w5" [574ea3e5-2c51-4043-a2cc-ab96d9b51d01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vk5w5" [574ea3e5-2c51-4043-a2cc-ab96d9b51d01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00258121s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-308161 "pgrep -a kubelet"
I1207 23:50:39.790266  375863 config.go:182] Loaded profile config "enable-default-cni-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-308161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rncz7" [2b908b2f-6507-4011-9593-f5a712af2a0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rncz7" [2b908b2f-6507-4011-9593-f5a712af2a0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003199501s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-308161 "pgrep -a kubelet"
I1207 23:50:56.954355  375863 config.go:182] Loaded profile config "bridge-308161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-308161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w5q4h" [0edcbd5a-f867-47d3-842d-6ed72089265d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w5q4h" [0edcbd5a-f867-47d3-842d-6ed72089265d] Running
E1207 23:51:00.768035  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:00.774341  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:00.785756  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:00.807814  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:00.849867  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:00.931304  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:01.093505  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:01.415565  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:51:02.057412  375863 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/old-k8s-version-415725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004247268s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-308161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-308161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (33/418)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
260 TestGvisorAddon 0
282 TestImageBuild 0
283 TestISOImage 0
347 TestChangeNoneUser 0
350 TestScheduledStopWindows 0
352 TestSkaffold 0
366 TestStartStop/group/disable-driver-mounts 0.17
390 TestNetworkPlugins/group/kubenet 3.38
398 TestNetworkPlugins/group/cilium 3.76
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
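Note: the runtime-gated skips above (and the matching ones in the NewestKubernetes group below) come from a guard at the top of the test body that compares the container runtime under test against docker. A minimal, hypothetical Go sketch of that pattern follows; the helper name and the way the runtime string is obtained are illustrative only and not the actual minikube test code:

package example

import "testing"

// skipUnlessDockerRuntime skips the calling test unless the cluster under
// test uses the docker container runtime. In the real suite the runtime
// string comes from the profile configuration; here it is passed in directly.
func skipUnlessDockerRuntime(t *testing.T, containerRuntime string) {
	t.Helper()
	if containerRuntime != "docker" {
		t.Skipf("only validate docker env with docker container runtime, currently testing %s", containerRuntime)
	}
}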

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-059337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-059337
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-308161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-308161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-389929
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-315368
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-477061
contexts:
- context:
    cluster: kubernetes-upgrade-389929
    user: kubernetes-upgrade-389929
  name: kubernetes-upgrade-389929
- context:
    cluster: running-upgrade-315368
    user: running-upgrade-315368
  name: running-upgrade-315368
- context:
    cluster: stopped-upgrade-477061
    user: stopped-upgrade-477061
  name: stopped-upgrade-477061
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-389929
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/kubernetes-upgrade-389929/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/kubernetes-upgrade-389929/client.key
- name: running-upgrade-315368
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/running-upgrade-315368/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/running-upgrade-315368/client.key
- name: stopped-upgrade-477061
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/stopped-upgrade-477061/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/stopped-upgrade-477061/client.key
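
Note: current-context is empty and no kubenet-308161 entry exists in the config above, which is why every kubectl command in this debug dump fails with "context was not found". A minimal sketch of checking this programmatically, assuming the standard k8s.io/client-go/tools/clientcmd package (this helper is not part of the test suite itself):

package example

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists loads a kubeconfig and reports whether the named context is
// defined, printing a hint when no current-context is set.
func contextExists(kubeconfigPath, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	if cfg.CurrentContext == "" {
		fmt.Printf("no current-context set in %s\n", kubeconfigPath)
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}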

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-308161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308161"

                                                
                                                
----------------------- debugLogs end: kubenet-308161 [took: 3.209499792s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-308161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-308161
--- SKIP: TestNetworkPlugins/group/kubenet (3.38s)
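Note: the debugLogs block above is produced by running a fixed list of kubectl and host-level commands against the (never-created) kubenet-308161 profile and printing each command's output under a ">>>" header. A rough, illustrative Go sketch of that collection loop follows; the command list and helper are assumptions, not the actual net_test.go code:

package example

import (
	"fmt"
	"os/exec"
)

// collectDebugLogs runs a small, illustrative set of diagnostic commands for a
// profile and prints each command's combined output, mirroring the ">>>"
// sections in this report.
func collectDebugLogs(profile string) {
	cmds := [][]string{
		{"kubectl", "--context", profile, "get", "nodes,services,endpoints,daemonsets,deployments,pods", "-A"},
		{"kubectl", "--context", profile, "config", "view"},
		{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
	}
	for _, c := range cmds {
		fmt.Printf(">>> %v:\n", c)
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("error:", err)
		}
	}
}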

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-308161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-308161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-389929
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: running-upgrade-315368
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-372312/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:40:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-477061
contexts:
- context:
    cluster: kubernetes-upgrade-389929
    user: kubernetes-upgrade-389929
  name: kubernetes-upgrade-389929
- context:
    cluster: running-upgrade-315368
    user: running-upgrade-315368
  name: running-upgrade-315368
- context:
    cluster: stopped-upgrade-477061
    user: stopped-upgrade-477061
  name: stopped-upgrade-477061
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-389929
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/kubernetes-upgrade-389929/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/kubernetes-upgrade-389929/client.key
- name: running-upgrade-315368
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/running-upgrade-315368/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/running-upgrade-315368/client.key
- name: stopped-upgrade-477061
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/stopped-upgrade-477061/client.crt
    client-key: /home/jenkins/minikube-integration/22054-372312/.minikube/profiles/stopped-upgrade-477061/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-308161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-308161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308161"

                                                
                                                
----------------------- debugLogs end: cilium-308161 [took: 3.573016159s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-308161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-308161
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)

                                                
                                    